Test Report: QEMU_macOS 17703

e76ebe347b3a1e1a0d734b84313c6ab0b6541a2c : 2023-12-01 : 32109

Test failures (142/247)

Order   Failed test   Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 14.19
7 TestDownloadOnly/v1.16.0/kubectl 0
27 TestOffline 10
32 TestAddons/Setup 10.57
33 TestCertOptions 10.16
34 TestCertExpiration 195.51
35 TestDockerFlags 10.1
36 TestForceSystemdFlag 11.36
37 TestForceSystemdEnv 10.15
43 TestErrorSpam/setup 9.92
52 TestFunctional/serial/StartWithProxy 9.97
54 TestFunctional/serial/SoftStart 5.29
55 TestFunctional/serial/KubeContext 0.06
56 TestFunctional/serial/KubectlGetPods 0.06
63 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.04
64 TestFunctional/serial/CacheCmd/cache/cache_reload 0.16
66 TestFunctional/serial/MinikubeKubectlCmd 0.5
67 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.66
68 TestFunctional/serial/ExtraConfig 5.29
69 TestFunctional/serial/ComponentHealth 0.06
70 TestFunctional/serial/LogsCmd 0.1
71 TestFunctional/serial/LogsFileCmd 0.09
72 TestFunctional/serial/InvalidService 0.03
75 TestFunctional/parallel/DashboardCmd 0.21
78 TestFunctional/parallel/StatusCmd 0.13
82 TestFunctional/parallel/ServiceCmdConnect 0.13
84 TestFunctional/parallel/PersistentVolumeClaim 0.03
86 TestFunctional/parallel/SSHCmd 0.14
87 TestFunctional/parallel/CpCmd 0.21
89 TestFunctional/parallel/FileSync 0.08
90 TestFunctional/parallel/CertSync 0.31
94 TestFunctional/parallel/NodeLabels 0.06
96 TestFunctional/parallel/NonActiveRuntimeDisabled 0.05
100 TestFunctional/parallel/Version/components 0.05
101 TestFunctional/parallel/ImageCommands/ImageListShort 0.04
102 TestFunctional/parallel/ImageCommands/ImageListTable 0.04
103 TestFunctional/parallel/ImageCommands/ImageListJson 0.04
104 TestFunctional/parallel/ImageCommands/ImageListYaml 0.04
105 TestFunctional/parallel/ImageCommands/ImageBuild 0.12
107 TestFunctional/parallel/DockerEnv/bash 0.05
108 TestFunctional/parallel/UpdateContextCmd/no_changes 0.04
109 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
110 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
111 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
112 TestFunctional/parallel/ServiceCmd/List 0.04
113 TestFunctional/parallel/ServiceCmd/JSONOutput 0.04
114 TestFunctional/parallel/ServiceCmd/HTTPS 0.05
115 TestFunctional/parallel/ServiceCmd/Format 0.05
116 TestFunctional/parallel/ServiceCmd/URL 0.05
118 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.08
121 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
122 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 91.63
123 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.36
124 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.37
125 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.62
126 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.04
128 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.08
136 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.07
138 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 38.67
146 TestImageBuild/serial/Setup 9.87
148 TestIngressAddonLegacy/StartLegacyK8sCluster 17.01
150 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 0.13
152 TestIngressAddonLegacy/serial/ValidateIngressAddons 0.03
155 TestJSONOutput/start/Command 9.92
161 TestJSONOutput/pause/Command 0.09
167 TestJSONOutput/unpause/Command 0.05
184 TestMinikubeProfile 10.31
187 TestMountStart/serial/StartWithMountFirst 9.96
190 TestMultiNode/serial/FreshStart2Nodes 10.05
191 TestMultiNode/serial/DeployApp2Nodes 115.12
192 TestMultiNode/serial/PingHostFrom2Pods 0.09
193 TestMultiNode/serial/AddNode 0.11
194 TestMultiNode/serial/MultiNodeLabels 0.06
195 TestMultiNode/serial/ProfileList 0.1
196 TestMultiNode/serial/CopyFile 0.06
197 TestMultiNode/serial/StopNode 0.15
198 TestMultiNode/serial/StartAfterStop 0.12
199 TestMultiNode/serial/RestartKeepsNodes 5.41
200 TestMultiNode/serial/DeleteNode 0.11
201 TestMultiNode/serial/StopMultiNode 0.17
202 TestMultiNode/serial/RestartMultiNode 5.27
203 TestMultiNode/serial/ValidateNameConflict 20.28
207 TestPreload 10.03
209 TestScheduledStopUnix 10.01
210 TestSkaffold 12.15
213 TestRunningBinaryUpgrade 145.41
215 TestKubernetesUpgrade 15.38
228 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.61
229 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.42
230 TestStoppedBinaryUpgrade/Setup 144.51
232 TestPause/serial/Start 9.9
242 TestNoKubernetes/serial/StartWithK8s 10.01
243 TestNoKubernetes/serial/StartWithStopK8s 5.35
244 TestNoKubernetes/serial/Start 5.34
248 TestNoKubernetes/serial/StartNoArgs 5.36
250 TestNetworkPlugins/group/auto/Start 9.98
251 TestNetworkPlugins/group/kindnet/Start 9.85
252 TestNetworkPlugins/group/calico/Start 9.89
253 TestNetworkPlugins/group/custom-flannel/Start 9.92
254 TestNetworkPlugins/group/false/Start 9.95
255 TestNetworkPlugins/group/enable-default-cni/Start 9.8
256 TestNetworkPlugins/group/flannel/Start 9.8
257 TestNetworkPlugins/group/bridge/Start 10.2
258 TestStoppedBinaryUpgrade/Upgrade 2.22
259 TestStoppedBinaryUpgrade/MinikubeLogs 0.16
260 TestNetworkPlugins/group/kubenet/Start 9.95
262 TestStartStop/group/old-k8s-version/serial/FirstStart 12.18
264 TestStartStop/group/no-preload/serial/FirstStart 9.94
265 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
266 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.13
269 TestStartStop/group/old-k8s-version/serial/SecondStart 7.2
270 TestStartStop/group/no-preload/serial/DeployApp 0.09
271 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.12
274 TestStartStop/group/no-preload/serial/SecondStart 5.22
275 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
276 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
277 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
278 TestStartStop/group/old-k8s-version/serial/Pause 0.11
279 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
280 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
281 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.08
283 TestStartStop/group/embed-certs/serial/FirstStart 10.04
284 TestStartStop/group/no-preload/serial/Pause 0.12
286 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 11.91
287 TestStartStop/group/embed-certs/serial/DeployApp 0.1
288 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.12
291 TestStartStop/group/embed-certs/serial/SecondStart 7.33
292 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
293 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
296 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.24
297 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
298 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
299 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
300 TestStartStop/group/embed-certs/serial/Pause 0.1
301 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
302 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
303 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.08
305 TestStartStop/group/newest-cni/serial/FirstStart 9.94
306 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.13
311 TestStartStop/group/newest-cni/serial/SecondStart 5.29
314 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.08
315 TestStartStop/group/newest-cni/serial/Pause 0.11

TestDownloadOnly/v1.16.0/json-events (14.19s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-993000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-993000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=qemu2 : exit status 40 (14.193411958s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"0aa95278-9dd9-4f1b-882c-615b9d722f9b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-993000] minikube v1.32.0 on Darwin 14.1.2 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"837c8c05-f990-4f4b-8570-485546b2e61b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17703"}}
	{"specversion":"1.0","id":"e9dc5466-c8cc-479a-a5be-4a9e79747bc0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig"}}
	{"specversion":"1.0","id":"446eea14-8da9-4f6d-85f3-13aff793f466","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"9dcf8379-0d3c-4b45-b6c7-230b8384a6fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"80e08ab7-ddfd-4e02-a3f1-fdf3932592a1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube"}}
	{"specversion":"1.0","id":"cc474d47-0ce3-4974-9941-36127cf43397","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"8f0627c3-15f9-4ed0-b3cb-27418b9456eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"f26202bc-d8ac-4cb4-b79f-8d1f48bd073a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"7088476e-5ae4-4c14-b9f1-85c5a21dd5b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"391ecfd4-935b-4bba-9905-e8d6f4384f2a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node download-only-993000 in cluster download-only-993000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"e5dd9bda-7389-498c-8b6a-2ebb7775dd43","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.16.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"8c16f397-2dc5-4b98-b904-f5fb1a74d8e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/17703-5375/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1064c0a80 0x1064c0a80 0x1064c0a80 0x1064c0a80 0x1064c0a80 0x1064c0a80 0x1064c0a80] Decompressors:map[bz2:0x14000801060 gz:0x14000801068 tar:0x14000801010 tar.bz2:0x14000801020 tar.gz:0x14000801030 tar.xz:0x14000801040 tar.zst:0x14000801050 tbz2:0x14000801020 tgz:0x140008
01030 txz:0x14000801040 tzst:0x14000801050 xz:0x14000801070 zip:0x14000801080 zst:0x14000801078] Getters:map[file:0x1400261c770 http:0x14000516190 https:0x14000516500] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"293a762e-39ce-4fae-86fa-ed24490089fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 10:03:06.058288    5827 out.go:296] Setting OutFile to fd 1 ...
	I1201 10:03:06.058458    5827 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:03:06.058462    5827 out.go:309] Setting ErrFile to fd 2...
	I1201 10:03:06.058464    5827 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:03:06.058581    5827 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17703-5375/.minikube/bin
	W1201 10:03:06.058665    5827 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17703-5375/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17703-5375/.minikube/config/config.json: no such file or directory
	I1201 10:03:06.059916    5827 out.go:303] Setting JSON to true
	I1201 10:03:06.077061    5827 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1960,"bootTime":1701451826,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1201 10:03:06.077136    5827 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1201 10:03:06.084877    5827 out.go:97] [download-only-993000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1201 10:03:06.088799    5827 out.go:169] MINIKUBE_LOCATION=17703
	I1201 10:03:06.085024    5827 notify.go:220] Checking for updates...
	W1201 10:03:06.085070    5827 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball: no such file or directory
	I1201 10:03:06.111827    5827 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	I1201 10:03:06.115878    5827 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1201 10:03:06.123831    5827 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 10:03:06.131851    5827 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	W1201 10:03:06.138930    5827 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1201 10:03:06.139163    5827 driver.go:392] Setting default libvirt URI to qemu:///system
	I1201 10:03:06.143658    5827 out.go:97] Using the qemu2 driver based on user configuration
	I1201 10:03:06.143665    5827 start.go:298] selected driver: qemu2
	I1201 10:03:06.143670    5827 start.go:902] validating driver "qemu2" against <nil>
	I1201 10:03:06.143717    5827 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1201 10:03:06.147846    5827 out.go:169] Automatically selected the socket_vmnet network
	I1201 10:03:06.154612    5827 start_flags.go:394] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1201 10:03:06.154721    5827 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1201 10:03:06.154856    5827 cni.go:84] Creating CNI manager for ""
	I1201 10:03:06.154878    5827 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1201 10:03:06.154885    5827 start_flags.go:323] config:
	{Name:download-only-993000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-993000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRun
time:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1201 10:03:06.160158    5827 iso.go:125] acquiring lock: {Name:mk3aa781e939aa763c73ad5fb916ad9c0c6cf746 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 10:03:06.162879    5827 out.go:97] Downloading VM boot image ...
	I1201 10:03:06.162901    5827 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/iso/arm64/minikube-v1.32.1-1701387192-17703-arm64.iso
	I1201 10:03:13.658964    5827 out.go:97] Starting control plane node download-only-993000 in cluster download-only-993000
	I1201 10:03:13.658996    5827 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1201 10:03:13.715667    5827 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I1201 10:03:13.715684    5827 cache.go:56] Caching tarball of preloaded images
	I1201 10:03:13.715859    5827 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1201 10:03:13.720014    5827 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1201 10:03:13.720022    5827 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I1201 10:03:13.794571    5827 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I1201 10:03:19.159736    5827 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I1201 10:03:19.159889    5827 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I1201 10:03:19.801020    5827 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1201 10:03:19.801233    5827 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17703-5375/.minikube/profiles/download-only-993000/config.json ...
	I1201 10:03:19.801249    5827 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17703-5375/.minikube/profiles/download-only-993000/config.json: {Name:mk1c39f52642e4a0152308e0d2fa63bca04e3751 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 10:03:19.801457    5827 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1201 10:03:19.801631    5827 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 -> /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/darwin/arm64/v1.16.0/kubectl
	I1201 10:03:20.168577    5827 out.go:169] 
	W1201 10:03:20.178774    5827 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/17703-5375/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1064c0a80 0x1064c0a80 0x1064c0a80 0x1064c0a80 0x1064c0a80 0x1064c0a80 0x1064c0a80] Decompressors:map[bz2:0x14000801060 gz:0x14000801068 tar:0x14000801010 tar.bz2:0x14000801020 tar.gz:0x14000801030 tar.xz:0x14000801040 tar.zst:0x14000801050 tbz2:0x14000801020 tgz:0x14000801030 txz:0x14000801040 tzst:0x14000801050 xz:0x14000801070 zip:0x14000801080 zst:0x14000801078] Getters:map[file:0x1400261c770 http:0x14000516190 https:0x14000516500] Dir:false ProgressListener
:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W1201 10:03:20.178797    5827 out_reason.go:110] 
	W1201 10:03:20.190646    5827 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 10:03:20.193652    5827 out.go:169] 

                                                
                                                
** /stderr **
aaa_download_only_test.go:71: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-993000" "--force" "--alsologtostderr" "--kubernetes-version=v1.16.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.16.0/json-events (14.19s)
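The failure above comes down to a 404 on the kubectl checksum URL for darwin/arm64 at v1.16.0: that Kubernetes release predates Apple-silicon builds, so dl.k8s.io has no darwin/arm64 artifacts to serve. A minimal Go sketch that probes the same checksum URL the downloader requested (the URL is copied from the log; everything else is illustrative, not minikube code):

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Checksum URL copied verbatim from the error above; minikube fetches it
	// before downloading the kubectl binary itself.
	url := "https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1"

	resp, err := http.Head(url)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	resp.Body.Close()

	// The run above reports "bad response code: 404" for this URL, consistent
	// with no darwin/arm64 binaries being published for Kubernetes v1.16.0.
	fmt.Println(url, "->", resp.Status)
}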

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:163: expected the file for binary exist at "/Users/jenkins/minikube-integration/17703-5375/.minikube/cache/darwin/arm64/v1.16.0/kubectl" but got error stat /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/darwin/arm64/v1.16.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.16.0/kubectl (0.00s)
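This subtest only confirms that the earlier download left kubectl in the cache, so it fails as a direct consequence of the json-events failure above. A rough equivalent of that check (path copied from the message; this is a sketch, not the test's actual code):

package main

import (
	"fmt"
	"os"
)

func main() {
	// Cache location reported in the failure message above.
	path := "/Users/jenkins/minikube-integration/17703-5375/.minikube/cache/darwin/arm64/v1.16.0/kubectl"

	// Because the preceding download step exited with status 40, nothing was
	// ever written here and the stat fails with "no such file or directory".
	if _, err := os.Stat(path); err != nil {
		fmt.Println("kubectl not cached:", err)
		return
	}
	fmt.Println("kubectl cached at", path)
}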

                                                
                                    
TestOffline (10s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-242000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-242000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.82992375s)

                                                
                                                
-- stdout --
	* [offline-docker-242000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17703
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node offline-docker-242000 in cluster offline-docker-242000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-242000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 10:11:09.332324    7013 out.go:296] Setting OutFile to fd 1 ...
	I1201 10:11:09.332475    7013 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:11:09.332478    7013 out.go:309] Setting ErrFile to fd 2...
	I1201 10:11:09.332481    7013 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:11:09.332606    7013 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17703-5375/.minikube/bin
	I1201 10:11:09.333778    7013 out.go:303] Setting JSON to false
	I1201 10:11:09.351202    7013 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2443,"bootTime":1701451826,"procs":450,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1201 10:11:09.351303    7013 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1201 10:11:09.357129    7013 out.go:177] * [offline-docker-242000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1201 10:11:09.368036    7013 out.go:177]   - MINIKUBE_LOCATION=17703
	I1201 10:11:09.365186    7013 notify.go:220] Checking for updates...
	I1201 10:11:09.376046    7013 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	I1201 10:11:09.384059    7013 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1201 10:11:09.392076    7013 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 10:11:09.400045    7013 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	I1201 10:11:09.408066    7013 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 10:11:09.412482    7013 config.go:182] Loaded profile config "multinode-486000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1201 10:11:09.412550    7013 driver.go:392] Setting default libvirt URI to qemu:///system
	I1201 10:11:09.416019    7013 out.go:177] * Using the qemu2 driver based on user configuration
	I1201 10:11:09.422896    7013 start.go:298] selected driver: qemu2
	I1201 10:11:09.422906    7013 start.go:902] validating driver "qemu2" against <nil>
	I1201 10:11:09.422913    7013 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 10:11:09.424893    7013 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1201 10:11:09.428040    7013 out.go:177] * Automatically selected the socket_vmnet network
	I1201 10:11:09.431207    7013 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 10:11:09.431250    7013 cni.go:84] Creating CNI manager for ""
	I1201 10:11:09.431257    7013 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1201 10:11:09.431261    7013 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1201 10:11:09.431267    7013 start_flags.go:323] config:
	{Name:offline-docker-242000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:offline-docker-242000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthS
ock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1201 10:11:09.435911    7013 iso.go:125] acquiring lock: {Name:mk3aa781e939aa763c73ad5fb916ad9c0c6cf746 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 10:11:09.443994    7013 out.go:177] * Starting control plane node offline-docker-242000 in cluster offline-docker-242000
	I1201 10:11:09.448040    7013 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1201 10:11:09.448083    7013 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I1201 10:11:09.448092    7013 cache.go:56] Caching tarball of preloaded images
	I1201 10:11:09.448192    7013 preload.go:174] Found /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1201 10:11:09.448198    7013 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1201 10:11:09.448272    7013 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17703-5375/.minikube/profiles/offline-docker-242000/config.json ...
	I1201 10:11:09.448289    7013 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17703-5375/.minikube/profiles/offline-docker-242000/config.json: {Name:mk4bca089c7ff6d09640b58e59ac61af2efa3ac8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 10:11:09.448514    7013 start.go:365] acquiring machines lock for offline-docker-242000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:11:09.448547    7013 start.go:369] acquired machines lock for "offline-docker-242000" in 24.75µs
	I1201 10:11:09.448570    7013 start.go:93] Provisioning new machine with config: &{Name:offline-docker-242000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.28.4 ClusterName:offline-docker-242000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1201 10:11:09.448605    7013 start.go:125] createHost starting for "" (driver="qemu2")
	I1201 10:11:09.457067    7013 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1201 10:11:09.472179    7013 start.go:159] libmachine.API.Create for "offline-docker-242000" (driver="qemu2")
	I1201 10:11:09.472211    7013 client.go:168] LocalClient.Create starting
	I1201 10:11:09.472290    7013 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/ca.pem
	I1201 10:11:09.472324    7013 main.go:141] libmachine: Decoding PEM data...
	I1201 10:11:09.472337    7013 main.go:141] libmachine: Parsing certificate...
	I1201 10:11:09.472378    7013 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/cert.pem
	I1201 10:11:09.472399    7013 main.go:141] libmachine: Decoding PEM data...
	I1201 10:11:09.472406    7013 main.go:141] libmachine: Parsing certificate...
	I1201 10:11:09.472739    7013 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17703-5375/.minikube/cache/iso/arm64/minikube-v1.32.1-1701387192-17703-arm64.iso...
	I1201 10:11:09.610257    7013 main.go:141] libmachine: Creating SSH key...
	I1201 10:11:09.651550    7013 main.go:141] libmachine: Creating Disk image...
	I1201 10:11:09.651557    7013 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1201 10:11:09.651710    7013 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/offline-docker-242000/disk.qcow2.raw /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/offline-docker-242000/disk.qcow2
	I1201 10:11:09.664325    7013 main.go:141] libmachine: STDOUT: 
	I1201 10:11:09.664425    7013 main.go:141] libmachine: STDERR: 
	I1201 10:11:09.664588    7013 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/offline-docker-242000/disk.qcow2 +20000M
	I1201 10:11:09.676750    7013 main.go:141] libmachine: STDOUT: Image resized.
	
	I1201 10:11:09.676780    7013 main.go:141] libmachine: STDERR: 
	I1201 10:11:09.676797    7013 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/offline-docker-242000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/offline-docker-242000/disk.qcow2
	I1201 10:11:09.676804    7013 main.go:141] libmachine: Starting QEMU VM...
	I1201 10:11:09.676832    7013 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/offline-docker-242000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/offline-docker-242000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/offline-docker-242000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:b7:50:3c:a3:b7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/offline-docker-242000/disk.qcow2
	I1201 10:11:09.678706    7013 main.go:141] libmachine: STDOUT: 
	I1201 10:11:09.678728    7013 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:11:09.678748    7013 client.go:171] LocalClient.Create took 206.534958ms
	I1201 10:11:11.680765    7013 start.go:128] duration metric: createHost completed in 2.232207458s
	I1201 10:11:11.680790    7013 start.go:83] releasing machines lock for "offline-docker-242000", held for 2.232277541s
	W1201 10:11:11.680806    7013 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:11:11.687204    7013 out.go:177] * Deleting "offline-docker-242000" in qemu2 ...
	W1201 10:11:11.701813    7013 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:11:11.701825    7013 start.go:709] Will try again in 5 seconds ...
	I1201 10:11:16.703951    7013 start.go:365] acquiring machines lock for offline-docker-242000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:11:16.704340    7013 start.go:369] acquired machines lock for "offline-docker-242000" in 284.583µs
	I1201 10:11:16.704444    7013 start.go:93] Provisioning new machine with config: &{Name:offline-docker-242000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.28.4 ClusterName:offline-docker-242000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1201 10:11:16.704771    7013 start.go:125] createHost starting for "" (driver="qemu2")
	I1201 10:11:16.718132    7013 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1201 10:11:16.763677    7013 start.go:159] libmachine.API.Create for "offline-docker-242000" (driver="qemu2")
	I1201 10:11:16.763730    7013 client.go:168] LocalClient.Create starting
	I1201 10:11:16.763863    7013 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/ca.pem
	I1201 10:11:16.763926    7013 main.go:141] libmachine: Decoding PEM data...
	I1201 10:11:16.763942    7013 main.go:141] libmachine: Parsing certificate...
	I1201 10:11:16.764003    7013 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/cert.pem
	I1201 10:11:16.764046    7013 main.go:141] libmachine: Decoding PEM data...
	I1201 10:11:16.764061    7013 main.go:141] libmachine: Parsing certificate...
	I1201 10:11:16.764540    7013 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17703-5375/.minikube/cache/iso/arm64/minikube-v1.32.1-1701387192-17703-arm64.iso...
	I1201 10:11:16.906149    7013 main.go:141] libmachine: Creating SSH key...
	I1201 10:11:17.048201    7013 main.go:141] libmachine: Creating Disk image...
	I1201 10:11:17.048208    7013 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1201 10:11:17.048393    7013 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/offline-docker-242000/disk.qcow2.raw /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/offline-docker-242000/disk.qcow2
	I1201 10:11:17.060482    7013 main.go:141] libmachine: STDOUT: 
	I1201 10:11:17.060501    7013 main.go:141] libmachine: STDERR: 
	I1201 10:11:17.060564    7013 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/offline-docker-242000/disk.qcow2 +20000M
	I1201 10:11:17.070939    7013 main.go:141] libmachine: STDOUT: Image resized.
	
	I1201 10:11:17.070955    7013 main.go:141] libmachine: STDERR: 
	I1201 10:11:17.070968    7013 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/offline-docker-242000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/offline-docker-242000/disk.qcow2
	I1201 10:11:17.070972    7013 main.go:141] libmachine: Starting QEMU VM...
	I1201 10:11:17.071012    7013 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/offline-docker-242000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/offline-docker-242000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/offline-docker-242000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:db:81:90:2b:8e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/offline-docker-242000/disk.qcow2
	I1201 10:11:17.072533    7013 main.go:141] libmachine: STDOUT: 
	I1201 10:11:17.072553    7013 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:11:17.072566    7013 client.go:171] LocalClient.Create took 308.837458ms
	I1201 10:11:19.074740    7013 start.go:128] duration metric: createHost completed in 2.369994s
	I1201 10:11:19.074797    7013 start.go:83] releasing machines lock for "offline-docker-242000", held for 2.370489583s
	W1201 10:11:19.075241    7013 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-242000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-242000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:11:19.099504    7013 out.go:177] 
	W1201 10:11:19.103206    7013 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1201 10:11:19.103233    7013 out.go:239] * 
	* 
	W1201 10:11:19.104543    7013 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 10:11:19.119966    7013 out.go:177] 

                                                
                                                
** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-242000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:523: *** TestOffline FAILED at 2023-12-01 10:11:19.131229 -0800 PST m=+493.172283960
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-242000 -n offline-docker-242000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-242000 -n offline-docker-242000: exit status 7 (60.752667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-242000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-242000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-242000
--- FAIL: TestOffline (10.00s)
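Every qemu2 start in this run (here and in the sections that follow) dies the same way: socket_vmnet_client cannot reach /var/run/socket_vmnet, so the VM is created and immediately torn down with "Connection refused". A small Go sketch that reproduces just that connectivity check against the socket path from the log (illustrative only; it assumes it is run on the affected agent):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Control socket that the qemu2 driver passes to socket_vmnet_client,
	// copied from the failing command line above.
	const sock = "/var/run/socket_vmnet"

	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "Connection refused" here matches the failure in the log and
		// suggests the socket_vmnet daemon is not running on the agent.
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections at", sock)
}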

                                                
                                    
TestAddons/Setup (10.57s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-659000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-659000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 80 (10.564272042s)

                                                
                                                
-- stdout --
	* [addons-659000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17703
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node addons-659000 in cluster addons-659000
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-659000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 10:03:36.660983    5914 out.go:296] Setting OutFile to fd 1 ...
	I1201 10:03:36.661110    5914 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:03:36.661112    5914 out.go:309] Setting ErrFile to fd 2...
	I1201 10:03:36.661115    5914 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:03:36.661253    5914 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17703-5375/.minikube/bin
	I1201 10:03:36.664355    5914 out.go:303] Setting JSON to false
	I1201 10:03:36.681330    5914 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1990,"bootTime":1701451826,"procs":448,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1201 10:03:36.681403    5914 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1201 10:03:36.685666    5914 out.go:177] * [addons-659000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1201 10:03:36.692301    5914 out.go:177]   - MINIKUBE_LOCATION=17703
	I1201 10:03:36.692360    5914 notify.go:220] Checking for updates...
	I1201 10:03:36.704301    5914 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	I1201 10:03:36.707231    5914 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1201 10:03:36.710316    5914 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 10:03:36.714298    5914 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	I1201 10:03:36.717309    5914 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 10:03:36.720468    5914 driver.go:392] Setting default libvirt URI to qemu:///system
	I1201 10:03:36.725190    5914 out.go:177] * Using the qemu2 driver based on user configuration
	I1201 10:03:36.732305    5914 start.go:298] selected driver: qemu2
	I1201 10:03:36.732312    5914 start.go:902] validating driver "qemu2" against <nil>
	I1201 10:03:36.732318    5914 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 10:03:36.734768    5914 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1201 10:03:36.739261    5914 out.go:177] * Automatically selected the socket_vmnet network
	I1201 10:03:36.742339    5914 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 10:03:36.742370    5914 cni.go:84] Creating CNI manager for ""
	I1201 10:03:36.742378    5914 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1201 10:03:36.742385    5914 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1201 10:03:36.742391    5914 start_flags.go:323] config:
	{Name:addons-659000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-659000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID
:0 AutoPauseInterval:1m0s GPUs:}
	I1201 10:03:36.747163    5914 iso.go:125] acquiring lock: {Name:mk3aa781e939aa763c73ad5fb916ad9c0c6cf746 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 10:03:36.755323    5914 out.go:177] * Starting control plane node addons-659000 in cluster addons-659000
	I1201 10:03:36.759146    5914 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1201 10:03:36.759171    5914 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I1201 10:03:36.759179    5914 cache.go:56] Caching tarball of preloaded images
	I1201 10:03:36.759237    5914 preload.go:174] Found /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1201 10:03:36.759243    5914 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1201 10:03:36.759457    5914 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17703-5375/.minikube/profiles/addons-659000/config.json ...
	I1201 10:03:36.759469    5914 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17703-5375/.minikube/profiles/addons-659000/config.json: {Name:mk486a89c0d2df6496e1af78dac852b050279d18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 10:03:36.759672    5914 start.go:365] acquiring machines lock for addons-659000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:03:36.759739    5914 start.go:369] acquired machines lock for "addons-659000" in 61.542µs
	I1201 10:03:36.759752    5914 start.go:93] Provisioning new machine with config: &{Name:addons-659000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.4 ClusterName:addons-659000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1201 10:03:36.759791    5914 start.go:125] createHost starting for "" (driver="qemu2")
	I1201 10:03:36.764313    5914 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1201 10:03:36.783691    5914 start.go:159] libmachine.API.Create for "addons-659000" (driver="qemu2")
	I1201 10:03:36.783738    5914 client.go:168] LocalClient.Create starting
	I1201 10:03:36.783882    5914 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/ca.pem
	I1201 10:03:36.980221    5914 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/cert.pem
	I1201 10:03:37.398636    5914 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17703-5375/.minikube/cache/iso/arm64/minikube-v1.32.1-1701387192-17703-arm64.iso...
	I1201 10:03:37.674088    5914 main.go:141] libmachine: Creating SSH key...
	I1201 10:03:37.746137    5914 main.go:141] libmachine: Creating Disk image...
	I1201 10:03:37.746142    5914 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1201 10:03:37.746328    5914 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/addons-659000/disk.qcow2.raw /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/addons-659000/disk.qcow2
	I1201 10:03:37.758406    5914 main.go:141] libmachine: STDOUT: 
	I1201 10:03:37.758430    5914 main.go:141] libmachine: STDERR: 
	I1201 10:03:37.758488    5914 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/addons-659000/disk.qcow2 +20000M
	I1201 10:03:37.769126    5914 main.go:141] libmachine: STDOUT: Image resized.
	
	I1201 10:03:37.769159    5914 main.go:141] libmachine: STDERR: 
	I1201 10:03:37.769176    5914 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/addons-659000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/addons-659000/disk.qcow2
	I1201 10:03:37.769182    5914 main.go:141] libmachine: Starting QEMU VM...
	I1201 10:03:37.769225    5914 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/addons-659000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/addons-659000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/addons-659000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:f3:2d:6a:21:79 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/addons-659000/disk.qcow2
	I1201 10:03:37.770968    5914 main.go:141] libmachine: STDOUT: 
	I1201 10:03:37.770987    5914 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:03:37.771007    5914 client.go:171] LocalClient.Create took 987.284459ms
	I1201 10:03:39.773124    5914 start.go:128] duration metric: createHost completed in 3.013380292s
	I1201 10:03:39.773200    5914 start.go:83] releasing machines lock for "addons-659000", held for 3.013523083s
	W1201 10:03:39.773261    5914 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:03:39.792495    5914 out.go:177] * Deleting "addons-659000" in qemu2 ...
	W1201 10:03:39.811046    5914 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:03:39.811080    5914 start.go:709] Will try again in 5 seconds ...
	I1201 10:03:44.813245    5914 start.go:365] acquiring machines lock for addons-659000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:03:44.813868    5914 start.go:369] acquired machines lock for "addons-659000" in 473.875µs
	I1201 10:03:44.814030    5914 start.go:93] Provisioning new machine with config: &{Name:addons-659000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.4 ClusterName:addons-659000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1201 10:03:44.814327    5914 start.go:125] createHost starting for "" (driver="qemu2")
	I1201 10:03:44.843975    5914 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1201 10:03:44.890186    5914 start.go:159] libmachine.API.Create for "addons-659000" (driver="qemu2")
	I1201 10:03:44.890220    5914 client.go:168] LocalClient.Create starting
	I1201 10:03:44.890348    5914 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/ca.pem
	I1201 10:03:44.890404    5914 main.go:141] libmachine: Decoding PEM data...
	I1201 10:03:44.890419    5914 main.go:141] libmachine: Parsing certificate...
	I1201 10:03:44.890508    5914 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/cert.pem
	I1201 10:03:44.890550    5914 main.go:141] libmachine: Decoding PEM data...
	I1201 10:03:44.890564    5914 main.go:141] libmachine: Parsing certificate...
	I1201 10:03:44.891061    5914 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17703-5375/.minikube/cache/iso/arm64/minikube-v1.32.1-1701387192-17703-arm64.iso...
	I1201 10:03:45.054814    5914 main.go:141] libmachine: Creating SSH key...
	I1201 10:03:45.110959    5914 main.go:141] libmachine: Creating Disk image...
	I1201 10:03:45.110965    5914 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1201 10:03:45.111166    5914 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/addons-659000/disk.qcow2.raw /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/addons-659000/disk.qcow2
	I1201 10:03:45.122989    5914 main.go:141] libmachine: STDOUT: 
	I1201 10:03:45.123016    5914 main.go:141] libmachine: STDERR: 
	I1201 10:03:45.123071    5914 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/addons-659000/disk.qcow2 +20000M
	I1201 10:03:45.133778    5914 main.go:141] libmachine: STDOUT: Image resized.
	
	I1201 10:03:45.133795    5914 main.go:141] libmachine: STDERR: 
	I1201 10:03:45.133806    5914 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/addons-659000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/addons-659000/disk.qcow2
	I1201 10:03:45.133812    5914 main.go:141] libmachine: Starting QEMU VM...
	I1201 10:03:45.133852    5914 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/addons-659000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/addons-659000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/addons-659000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:d9:13:26:63:3a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/addons-659000/disk.qcow2
	I1201 10:03:45.135548    5914 main.go:141] libmachine: STDOUT: 
	I1201 10:03:45.135563    5914 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:03:45.135578    5914 client.go:171] LocalClient.Create took 245.358917ms
	I1201 10:03:47.137730    5914 start.go:128] duration metric: createHost completed in 2.323420375s
	I1201 10:03:47.137812    5914 start.go:83] releasing machines lock for "addons-659000", held for 2.323972625s
	W1201 10:03:47.138158    5914 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p addons-659000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-659000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:03:47.154829    5914 out.go:177] 
	W1201 10:03:47.163047    5914 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1201 10:03:47.163103    5914 out.go:239] * 
	* 
	W1201 10:03:47.165657    5914 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 10:03:47.176760    5914 out.go:177] 

** /stderr **
addons_test.go:111: out/minikube-darwin-arm64 start -p addons-659000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 80
--- FAIL: TestAddons/Setup (10.57s)
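
Note: every start failure in this section reduces to the same root cause visible in the stderr above: libmachine shells out to /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so the qemu2 VM is never created. The sketch below is hypothetical and not part of the test suite; it simply dials the same unix socket directly and fails the same way when the daemon is not listening.

// Minimal diagnostic sketch (hypothetical, not part of minikube or this run):
// dial the socket_vmnet unix socket that socket_vmnet_client needs to reach.
// If nothing is listening on /var/run/socket_vmnet, this prints
// "connection refused", matching the errors logged above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}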

TestCertOptions (10.16s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-625000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-625000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.865214125s)

-- stdout --
	* [cert-options-625000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17703
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node cert-options-625000 in cluster cert-options-625000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-625000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-625000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-625000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-625000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-625000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 89 (79.848834ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-625000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-625000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 89
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-625000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-625000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-625000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 89 (48.474583ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-625000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-625000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 89
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-625000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2023-12-01 10:11:49.574521 -0800 PST m=+523.616299460
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-625000 -n cert-options-625000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-625000 -n cert-options-625000: exit status 7 (31.553833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-625000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-625000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-625000
--- FAIL: TestCertOptions (10.16s)
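
Note: the SAN assertions at cert_options_test.go:69 fail here only because the VM never started, so there is no apiserver.crt to inspect. For reference, the sketch below is a hedged, hypothetical local check of the kind of property those assertions verify; it is not the test's implementation (the test runs openssl inside the VM over SSH), and the path is the in-VM path quoted in the log, used illustratively.

// Hypothetical SAN check (not the test's code): parse a certificate with
// crypto/x509 and print its Subject Alternative Names.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt") // illustrative path
	if err != nil {
		fmt.Println("read cert:", err)
		return
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse cert:", err)
		return
	}
	fmt.Println("DNS SANs:", cert.DNSNames)    // expected by the test to include localhost, www.google.com
	fmt.Println("IP SANs:", cert.IPAddresses)  // expected by the test to include 127.0.0.1, 192.168.15.15
}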

TestCertExpiration (195.51s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-650000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-650000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (10.052457917s)

-- stdout --
	* [cert-expiration-650000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17703
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node cert-expiration-650000 in cluster cert-expiration-650000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-650000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-650000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-650000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-650000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-650000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.283333583s)

-- stdout --
	* [cert-expiration-650000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17703
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node cert-expiration-650000 in cluster cert-expiration-650000
	* Restarting existing qemu2 VM for "cert-expiration-650000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-650000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-650000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-650000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-650000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17703
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node cert-expiration-650000 in cluster cert-expiration-650000
	* Restarting existing qemu2 VM for "cert-expiration-650000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-650000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-650000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2023-12-01 10:14:49.803372 -0800 PST m=+703.849436460
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-650000 -n cert-expiration-650000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-650000 -n cert-expiration-650000: exit status 7 (69.640917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-650000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-650000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-650000
--- FAIL: TestCertExpiration (195.51s)

TestDockerFlags (10.1s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-025000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-025000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.820673958s)

-- stdout --
	* [docker-flags-025000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17703
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node docker-flags-025000 in cluster docker-flags-025000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-025000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1201 10:11:29.477007    7214 out.go:296] Setting OutFile to fd 1 ...
	I1201 10:11:29.477133    7214 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:11:29.477136    7214 out.go:309] Setting ErrFile to fd 2...
	I1201 10:11:29.477138    7214 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:11:29.477255    7214 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17703-5375/.minikube/bin
	I1201 10:11:29.478351    7214 out.go:303] Setting JSON to false
	I1201 10:11:29.494405    7214 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2463,"bootTime":1701451826,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1201 10:11:29.494487    7214 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1201 10:11:29.500126    7214 out.go:177] * [docker-flags-025000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1201 10:11:29.511989    7214 out.go:177]   - MINIKUBE_LOCATION=17703
	I1201 10:11:29.508013    7214 notify.go:220] Checking for updates...
	I1201 10:11:29.520044    7214 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	I1201 10:11:29.527916    7214 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1201 10:11:29.531957    7214 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 10:11:29.535862    7214 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	I1201 10:11:29.538936    7214 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 10:11:29.542488    7214 config.go:182] Loaded profile config "force-systemd-flag-442000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1201 10:11:29.542573    7214 config.go:182] Loaded profile config "multinode-486000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1201 10:11:29.542629    7214 driver.go:392] Setting default libvirt URI to qemu:///system
	I1201 10:11:29.546932    7214 out.go:177] * Using the qemu2 driver based on user configuration
	I1201 10:11:29.553797    7214 start.go:298] selected driver: qemu2
	I1201 10:11:29.553803    7214 start.go:902] validating driver "qemu2" against <nil>
	I1201 10:11:29.553810    7214 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 10:11:29.556360    7214 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1201 10:11:29.559919    7214 out.go:177] * Automatically selected the socket_vmnet network
	I1201 10:11:29.564090    7214 start_flags.go:926] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I1201 10:11:29.564158    7214 cni.go:84] Creating CNI manager for ""
	I1201 10:11:29.564166    7214 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1201 10:11:29.564170    7214 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1201 10:11:29.564175    7214 start_flags.go:323] config:
	{Name:docker-flags-025000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:docker-flags-025000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain
:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/ru
n/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1201 10:11:29.569098    7214 iso.go:125] acquiring lock: {Name:mk3aa781e939aa763c73ad5fb916ad9c0c6cf746 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 10:11:29.576004    7214 out.go:177] * Starting control plane node docker-flags-025000 in cluster docker-flags-025000
	I1201 10:11:29.578974    7214 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1201 10:11:29.579007    7214 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I1201 10:11:29.579016    7214 cache.go:56] Caching tarball of preloaded images
	I1201 10:11:29.579109    7214 preload.go:174] Found /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1201 10:11:29.579117    7214 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1201 10:11:29.579188    7214 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17703-5375/.minikube/profiles/docker-flags-025000/config.json ...
	I1201 10:11:29.579210    7214 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17703-5375/.minikube/profiles/docker-flags-025000/config.json: {Name:mka101748eb1a4f22da86d6a5dc63ecd6cfbf222 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 10:11:29.579497    7214 start.go:365] acquiring machines lock for docker-flags-025000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:11:29.579534    7214 start.go:369] acquired machines lock for "docker-flags-025000" in 29.333µs
	I1201 10:11:29.579547    7214 start.go:93] Provisioning new machine with config: &{Name:docker-flags-025000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root
SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:docker-flags-025000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion
:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1201 10:11:29.579580    7214 start.go:125] createHost starting for "" (driver="qemu2")
	I1201 10:11:29.586879    7214 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1201 10:11:29.603818    7214 start.go:159] libmachine.API.Create for "docker-flags-025000" (driver="qemu2")
	I1201 10:11:29.603848    7214 client.go:168] LocalClient.Create starting
	I1201 10:11:29.603909    7214 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/ca.pem
	I1201 10:11:29.603939    7214 main.go:141] libmachine: Decoding PEM data...
	I1201 10:11:29.603951    7214 main.go:141] libmachine: Parsing certificate...
	I1201 10:11:29.603987    7214 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/cert.pem
	I1201 10:11:29.604009    7214 main.go:141] libmachine: Decoding PEM data...
	I1201 10:11:29.604019    7214 main.go:141] libmachine: Parsing certificate...
	I1201 10:11:29.604347    7214 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17703-5375/.minikube/cache/iso/arm64/minikube-v1.32.1-1701387192-17703-arm64.iso...
	I1201 10:11:29.735659    7214 main.go:141] libmachine: Creating SSH key...
	I1201 10:11:29.828339    7214 main.go:141] libmachine: Creating Disk image...
	I1201 10:11:29.828345    7214 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1201 10:11:29.828513    7214 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/docker-flags-025000/disk.qcow2.raw /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/docker-flags-025000/disk.qcow2
	I1201 10:11:29.840851    7214 main.go:141] libmachine: STDOUT: 
	I1201 10:11:29.840871    7214 main.go:141] libmachine: STDERR: 
	I1201 10:11:29.840933    7214 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/docker-flags-025000/disk.qcow2 +20000M
	I1201 10:11:29.851393    7214 main.go:141] libmachine: STDOUT: Image resized.
	
	I1201 10:11:29.851412    7214 main.go:141] libmachine: STDERR: 
	I1201 10:11:29.851426    7214 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/docker-flags-025000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/docker-flags-025000/disk.qcow2
	I1201 10:11:29.851431    7214 main.go:141] libmachine: Starting QEMU VM...
	I1201 10:11:29.851473    7214 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/docker-flags-025000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/docker-flags-025000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/docker-flags-025000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:f6:77:c1:60:f7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/docker-flags-025000/disk.qcow2
	I1201 10:11:29.853194    7214 main.go:141] libmachine: STDOUT: 
	I1201 10:11:29.853207    7214 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:11:29.853228    7214 client.go:171] LocalClient.Create took 249.378583ms
	I1201 10:11:31.855353    7214 start.go:128] duration metric: createHost completed in 2.275805583s
	I1201 10:11:31.855411    7214 start.go:83] releasing machines lock for "docker-flags-025000", held for 2.275918542s
	W1201 10:11:31.855494    7214 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:11:31.885061    7214 out.go:177] * Deleting "docker-flags-025000" in qemu2 ...
	W1201 10:11:31.904969    7214 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:11:31.904993    7214 start.go:709] Will try again in 5 seconds ...
	I1201 10:11:36.907032    7214 start.go:365] acquiring machines lock for docker-flags-025000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:11:36.907370    7214 start.go:369] acquired machines lock for "docker-flags-025000" in 253.417µs
	I1201 10:11:36.907472    7214 start.go:93] Provisioning new machine with config: &{Name:docker-flags-025000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root
SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:docker-flags-025000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion
:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1201 10:11:36.907719    7214 start.go:125] createHost starting for "" (driver="qemu2")
	I1201 10:11:36.915448    7214 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1201 10:11:36.957938    7214 start.go:159] libmachine.API.Create for "docker-flags-025000" (driver="qemu2")
	I1201 10:11:36.958015    7214 client.go:168] LocalClient.Create starting
	I1201 10:11:36.958127    7214 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/ca.pem
	I1201 10:11:36.958184    7214 main.go:141] libmachine: Decoding PEM data...
	I1201 10:11:36.958201    7214 main.go:141] libmachine: Parsing certificate...
	I1201 10:11:36.958260    7214 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/cert.pem
	I1201 10:11:36.958300    7214 main.go:141] libmachine: Decoding PEM data...
	I1201 10:11:36.958311    7214 main.go:141] libmachine: Parsing certificate...
	I1201 10:11:36.958762    7214 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17703-5375/.minikube/cache/iso/arm64/minikube-v1.32.1-1701387192-17703-arm64.iso...
	I1201 10:11:37.100562    7214 main.go:141] libmachine: Creating SSH key...
	I1201 10:11:37.179620    7214 main.go:141] libmachine: Creating Disk image...
	I1201 10:11:37.179627    7214 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1201 10:11:37.179794    7214 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/docker-flags-025000/disk.qcow2.raw /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/docker-flags-025000/disk.qcow2
	I1201 10:11:37.191968    7214 main.go:141] libmachine: STDOUT: 
	I1201 10:11:37.191990    7214 main.go:141] libmachine: STDERR: 
	I1201 10:11:37.192049    7214 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/docker-flags-025000/disk.qcow2 +20000M
	I1201 10:11:37.203338    7214 main.go:141] libmachine: STDOUT: Image resized.
	
	I1201 10:11:37.203359    7214 main.go:141] libmachine: STDERR: 
	I1201 10:11:37.203370    7214 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/docker-flags-025000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/docker-flags-025000/disk.qcow2
	I1201 10:11:37.203376    7214 main.go:141] libmachine: Starting QEMU VM...
	I1201 10:11:37.203409    7214 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/docker-flags-025000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/docker-flags-025000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/docker-flags-025000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:2f:08:94:74:4a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/docker-flags-025000/disk.qcow2
	I1201 10:11:37.205146    7214 main.go:141] libmachine: STDOUT: 
	I1201 10:11:37.205168    7214 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:11:37.205190    7214 client.go:171] LocalClient.Create took 247.176333ms
	I1201 10:11:39.207318    7214 start.go:128] duration metric: createHost completed in 2.299609167s
	I1201 10:11:39.207369    7214 start.go:83] releasing machines lock for "docker-flags-025000", held for 2.300031167s
	W1201 10:11:39.207780    7214 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-025000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-025000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:11:39.233223    7214 out.go:177] 
	W1201 10:11:39.237432    7214 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1201 10:11:39.237464    7214 out.go:239] * 
	* 
	W1201 10:11:39.239444    7214 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 10:11:39.255392    7214 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-025000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-025000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-025000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 89 (85.95175ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p docker-flags-025000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-025000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 89
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-025000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-025000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-025000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-025000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 89 (53.578334ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p docker-flags-025000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-025000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 89
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-025000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-025000\"\n"
panic.go:523: *** TestDockerFlags FAILED at 2023-12-01 10:11:39.410676 -0800 PST m=+513.452212626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-025000 -n docker-flags-025000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-025000 -n docker-flags-025000: exit status 7 (36.169041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-025000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-025000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-025000
--- FAIL: TestDockerFlags (10.10s)
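Every start attempt above fails at the same step: /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so the qemu2 VM is never launched and the later systemctl assertions fail only as a consequence. A minimal pre-flight probe, sketched in Go below on the assumption that the daemon is expected at the socket path shown in these logs (the probe is not part of the minikube test suite), reproduces the failing condition directly:

	// socketcheck.go: hypothetical pre-flight probe, not part of the test suite.
	// It dials the unix socket the qemu2 driver uses and reports whether the
	// socket_vmnet daemon is accepting connections.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Path taken from the failing libmachine invocations above; adjust if
		// socket_vmnet is configured to listen elsewhere (assumption).
		const sock = "/var/run/socket_vmnet"

		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// This is the same condition the driver logs as
			// `Failed to connect to "/var/run/socket_vmnet": Connection refused`.
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
			os.Exit(1)
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the dial fails with "connection refused", the socket_vmnet daemon on the CI host is not running or is not listening on that path, which matches the ERROR lines captured above and explains the cascade of qemu2-driver failures in this report.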

TestForceSystemdFlag (11.36s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-442000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-442000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (11.128503667s)

-- stdout --
	* [force-systemd-flag-442000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17703
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node force-systemd-flag-442000 in cluster force-systemd-flag-442000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-442000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1201 10:11:23.141449    7190 out.go:296] Setting OutFile to fd 1 ...
	I1201 10:11:23.141598    7190 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:11:23.141601    7190 out.go:309] Setting ErrFile to fd 2...
	I1201 10:11:23.141604    7190 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:11:23.141723    7190 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17703-5375/.minikube/bin
	I1201 10:11:23.142836    7190 out.go:303] Setting JSON to false
	I1201 10:11:23.158913    7190 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2457,"bootTime":1701451826,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1201 10:11:23.159015    7190 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1201 10:11:23.165886    7190 out.go:177] * [force-systemd-flag-442000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1201 10:11:23.178833    7190 out.go:177]   - MINIKUBE_LOCATION=17703
	I1201 10:11:23.173874    7190 notify.go:220] Checking for updates...
	I1201 10:11:23.185801    7190 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	I1201 10:11:23.192740    7190 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1201 10:11:23.200743    7190 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 10:11:23.204745    7190 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	I1201 10:11:23.211826    7190 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 10:11:23.216080    7190 config.go:182] Loaded profile config "force-systemd-env-043000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1201 10:11:23.216149    7190 config.go:182] Loaded profile config "multinode-486000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1201 10:11:23.216197    7190 driver.go:392] Setting default libvirt URI to qemu:///system
	I1201 10:11:23.219786    7190 out.go:177] * Using the qemu2 driver based on user configuration
	I1201 10:11:23.227766    7190 start.go:298] selected driver: qemu2
	I1201 10:11:23.227773    7190 start.go:902] validating driver "qemu2" against <nil>
	I1201 10:11:23.227778    7190 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 10:11:23.230309    7190 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1201 10:11:23.234743    7190 out.go:177] * Automatically selected the socket_vmnet network
	I1201 10:11:23.236297    7190 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1201 10:11:23.236341    7190 cni.go:84] Creating CNI manager for ""
	I1201 10:11:23.236349    7190 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1201 10:11:23.236353    7190 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1201 10:11:23.236360    7190 start_flags.go:323] config:
	{Name:force-systemd-flag-442000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-442000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1201 10:11:23.240939    7190 iso.go:125] acquiring lock: {Name:mk3aa781e939aa763c73ad5fb916ad9c0c6cf746 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 10:11:23.247805    7190 out.go:177] * Starting control plane node force-systemd-flag-442000 in cluster force-systemd-flag-442000
	I1201 10:11:23.250791    7190 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1201 10:11:23.250821    7190 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I1201 10:11:23.250836    7190 cache.go:56] Caching tarball of preloaded images
	I1201 10:11:23.250896    7190 preload.go:174] Found /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1201 10:11:23.250907    7190 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1201 10:11:23.250980    7190 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17703-5375/.minikube/profiles/force-systemd-flag-442000/config.json ...
	I1201 10:11:23.250992    7190 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17703-5375/.minikube/profiles/force-systemd-flag-442000/config.json: {Name:mk81be6476e72b663a768cc814066eac6b6b4b63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 10:11:23.251310    7190 start.go:365] acquiring machines lock for force-systemd-flag-442000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:11:23.251349    7190 start.go:369] acquired machines lock for "force-systemd-flag-442000" in 30.875µs
	I1201 10:11:23.251362    7190 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-442000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-442000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mount
UID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1201 10:11:23.251394    7190 start.go:125] createHost starting for "" (driver="qemu2")
	I1201 10:11:23.259823    7190 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1201 10:11:23.277177    7190 start.go:159] libmachine.API.Create for "force-systemd-flag-442000" (driver="qemu2")
	I1201 10:11:23.277210    7190 client.go:168] LocalClient.Create starting
	I1201 10:11:23.277272    7190 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/ca.pem
	I1201 10:11:23.277301    7190 main.go:141] libmachine: Decoding PEM data...
	I1201 10:11:23.277310    7190 main.go:141] libmachine: Parsing certificate...
	I1201 10:11:23.277350    7190 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/cert.pem
	I1201 10:11:23.277372    7190 main.go:141] libmachine: Decoding PEM data...
	I1201 10:11:23.277381    7190 main.go:141] libmachine: Parsing certificate...
	I1201 10:11:23.277718    7190 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17703-5375/.minikube/cache/iso/arm64/minikube-v1.32.1-1701387192-17703-arm64.iso...
	I1201 10:11:23.408523    7190 main.go:141] libmachine: Creating SSH key...
	I1201 10:11:23.446764    7190 main.go:141] libmachine: Creating Disk image...
	I1201 10:11:23.446769    7190 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1201 10:11:23.446926    7190 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/force-systemd-flag-442000/disk.qcow2.raw /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/force-systemd-flag-442000/disk.qcow2
	I1201 10:11:23.458845    7190 main.go:141] libmachine: STDOUT: 
	I1201 10:11:23.458863    7190 main.go:141] libmachine: STDERR: 
	I1201 10:11:23.458915    7190 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/force-systemd-flag-442000/disk.qcow2 +20000M
	I1201 10:11:23.469490    7190 main.go:141] libmachine: STDOUT: Image resized.
	
	I1201 10:11:23.469505    7190 main.go:141] libmachine: STDERR: 
	I1201 10:11:23.469538    7190 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/force-systemd-flag-442000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/force-systemd-flag-442000/disk.qcow2
	I1201 10:11:23.469544    7190 main.go:141] libmachine: Starting QEMU VM...
	I1201 10:11:23.469580    7190 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/force-systemd-flag-442000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/force-systemd-flag-442000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/force-systemd-flag-442000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:47:3b:65:dc:fa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/force-systemd-flag-442000/disk.qcow2
	I1201 10:11:23.471260    7190 main.go:141] libmachine: STDOUT: 
	I1201 10:11:23.471275    7190 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:11:23.471294    7190 client.go:171] LocalClient.Create took 194.080666ms
	I1201 10:11:25.473428    7190 start.go:128] duration metric: createHost completed in 2.222063792s
	I1201 10:11:25.473480    7190 start.go:83] releasing machines lock for "force-systemd-flag-442000", held for 2.222170125s
	W1201 10:11:25.473547    7190 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:11:25.480937    7190 out.go:177] * Deleting "force-systemd-flag-442000" in qemu2 ...
	W1201 10:11:25.510844    7190 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:11:25.510879    7190 start.go:709] Will try again in 5 seconds ...
	I1201 10:11:30.512924    7190 start.go:365] acquiring machines lock for force-systemd-flag-442000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:11:31.855543    7190 start.go:369] acquired machines lock for "force-systemd-flag-442000" in 1.342553792s
	I1201 10:11:31.855649    7190 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-442000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-442000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mount
UID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1201 10:11:31.855956    7190 start.go:125] createHost starting for "" (driver="qemu2")
	I1201 10:11:31.876677    7190 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1201 10:11:31.925031    7190 start.go:159] libmachine.API.Create for "force-systemd-flag-442000" (driver="qemu2")
	I1201 10:11:31.925073    7190 client.go:168] LocalClient.Create starting
	I1201 10:11:31.925190    7190 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/ca.pem
	I1201 10:11:31.925256    7190 main.go:141] libmachine: Decoding PEM data...
	I1201 10:11:31.925274    7190 main.go:141] libmachine: Parsing certificate...
	I1201 10:11:31.925334    7190 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/cert.pem
	I1201 10:11:31.925384    7190 main.go:141] libmachine: Decoding PEM data...
	I1201 10:11:31.925395    7190 main.go:141] libmachine: Parsing certificate...
	I1201 10:11:31.925857    7190 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17703-5375/.minikube/cache/iso/arm64/minikube-v1.32.1-1701387192-17703-arm64.iso...
	I1201 10:11:32.069146    7190 main.go:141] libmachine: Creating SSH key...
	I1201 10:11:32.162497    7190 main.go:141] libmachine: Creating Disk image...
	I1201 10:11:32.162503    7190 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1201 10:11:32.162669    7190 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/force-systemd-flag-442000/disk.qcow2.raw /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/force-systemd-flag-442000/disk.qcow2
	I1201 10:11:32.174729    7190 main.go:141] libmachine: STDOUT: 
	I1201 10:11:32.174748    7190 main.go:141] libmachine: STDERR: 
	I1201 10:11:32.174800    7190 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/force-systemd-flag-442000/disk.qcow2 +20000M
	I1201 10:11:32.185101    7190 main.go:141] libmachine: STDOUT: Image resized.
	
	I1201 10:11:32.185117    7190 main.go:141] libmachine: STDERR: 
	I1201 10:11:32.185133    7190 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/force-systemd-flag-442000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/force-systemd-flag-442000/disk.qcow2
	I1201 10:11:32.185140    7190 main.go:141] libmachine: Starting QEMU VM...
	I1201 10:11:32.185184    7190 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/force-systemd-flag-442000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/force-systemd-flag-442000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/force-systemd-flag-442000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:ce:38:1a:07:dc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/force-systemd-flag-442000/disk.qcow2
	I1201 10:11:32.186769    7190 main.go:141] libmachine: STDOUT: 
	I1201 10:11:32.186787    7190 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:11:32.186801    7190 client.go:171] LocalClient.Create took 261.726083ms
	I1201 10:11:34.188934    7190 start.go:128] duration metric: createHost completed in 2.333001125s
	I1201 10:11:34.188992    7190 start.go:83] releasing machines lock for "force-systemd-flag-442000", held for 2.333472334s
	W1201 10:11:34.189373    7190 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-442000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-442000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:11:34.212088    7190 out.go:177] 
	W1201 10:11:34.220140    7190 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1201 10:11:34.220174    7190 out.go:239] * 
	* 
	W1201 10:11:34.222136    7190 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 10:11:34.227059    7190 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-442000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-442000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-442000 ssh "docker info --format {{.CgroupDriver}}": exit status 89 (88.784666ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p force-systemd-flag-442000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-442000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 89
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2023-12-01 10:11:34.332924 -0800 PST m=+508.374340376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-442000 -n force-systemd-flag-442000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-442000 -n force-systemd-flag-442000: exit status 7 (36.277292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-442000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-442000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-442000
--- FAIL: TestForceSystemdFlag (11.36s)
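This test never reaches its cgroup-driver assertion for the same underlying reason: with the VM absent, the follow-up minikube ssh command exits with status 89 ("The control plane node must be running for this command"). The "(dbg) Run:" steps above are driven by the integration test harness; the sketch below is a simplified stand-in (not the actual helpers in docker_test.go or helpers_test.go) showing the shape of such a step, namely invoking the built binary and inspecting its combined output and exit status:

	// runstep.go: simplified stand-in for a "(dbg) Run:" step, for illustration only.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Same command the test issues; the profile name is the one from this run.
		cmd := exec.Command("out/minikube-darwin-arm64", "-p", "force-systemd-flag-442000",
			"ssh", "docker info --format {{.CgroupDriver}}")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)

		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// With the cluster never provisioned, this reports exit status 89.
			fmt.Printf("exit status %d\n", exitErr.ExitCode())
		}
	}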

TestForceSystemdEnv (10.15s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-043000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-043000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.929359792s)

-- stdout --
	* [force-systemd-env-043000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17703
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node force-systemd-env-043000 in cluster force-systemd-env-043000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-043000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1201 10:11:19.328341    7156 out.go:296] Setting OutFile to fd 1 ...
	I1201 10:11:19.328761    7156 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:11:19.328764    7156 out.go:309] Setting ErrFile to fd 2...
	I1201 10:11:19.328767    7156 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:11:19.328899    7156 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17703-5375/.minikube/bin
	I1201 10:11:19.333496    7156 out.go:303] Setting JSON to false
	I1201 10:11:19.350359    7156 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2453,"bootTime":1701451826,"procs":452,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1201 10:11:19.350453    7156 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1201 10:11:19.354060    7156 out.go:177] * [force-systemd-env-043000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1201 10:11:19.365007    7156 out.go:177]   - MINIKUBE_LOCATION=17703
	I1201 10:11:19.361166    7156 notify.go:220] Checking for updates...
	I1201 10:11:19.373459    7156 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	I1201 10:11:19.381006    7156 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1201 10:11:19.385012    7156 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 10:11:19.392974    7156 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	I1201 10:11:19.400988    7156 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I1201 10:11:19.404316    7156 config.go:182] Loaded profile config "multinode-486000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1201 10:11:19.404361    7156 driver.go:392] Setting default libvirt URI to qemu:///system
	I1201 10:11:19.406918    7156 out.go:177] * Using the qemu2 driver based on user configuration
	I1201 10:11:19.414999    7156 start.go:298] selected driver: qemu2
	I1201 10:11:19.415005    7156 start.go:902] validating driver "qemu2" against <nil>
	I1201 10:11:19.415010    7156 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 10:11:19.417380    7156 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1201 10:11:19.420016    7156 out.go:177] * Automatically selected the socket_vmnet network
	I1201 10:11:19.423086    7156 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1201 10:11:19.423128    7156 cni.go:84] Creating CNI manager for ""
	I1201 10:11:19.423135    7156 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1201 10:11:19.423144    7156 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1201 10:11:19.423149    7156 start_flags.go:323] config:
	{Name:force-systemd-env-043000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-env-043000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1201 10:11:19.427831    7156 iso.go:125] acquiring lock: {Name:mk3aa781e939aa763c73ad5fb916ad9c0c6cf746 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 10:11:19.434967    7156 out.go:177] * Starting control plane node force-systemd-env-043000 in cluster force-systemd-env-043000
	I1201 10:11:19.439016    7156 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1201 10:11:19.439056    7156 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I1201 10:11:19.439067    7156 cache.go:56] Caching tarball of preloaded images
	I1201 10:11:19.439131    7156 preload.go:174] Found /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1201 10:11:19.439137    7156 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1201 10:11:19.439212    7156 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17703-5375/.minikube/profiles/force-systemd-env-043000/config.json ...
	I1201 10:11:19.439223    7156 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17703-5375/.minikube/profiles/force-systemd-env-043000/config.json: {Name:mk0fca33b0423402727723b9de2adaab9765f231 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 10:11:19.439442    7156 start.go:365] acquiring machines lock for force-systemd-env-043000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:11:19.439473    7156 start.go:369] acquired machines lock for "force-systemd-env-043000" in 23.875µs
	I1201 10:11:19.439485    7156 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-043000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kube
rnetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-env-043000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUI
D:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1201 10:11:19.439516    7156 start.go:125] createHost starting for "" (driver="qemu2")
	I1201 10:11:19.443962    7156 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1201 10:11:19.460354    7156 start.go:159] libmachine.API.Create for "force-systemd-env-043000" (driver="qemu2")
	I1201 10:11:19.460381    7156 client.go:168] LocalClient.Create starting
	I1201 10:11:19.460452    7156 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/ca.pem
	I1201 10:11:19.460482    7156 main.go:141] libmachine: Decoding PEM data...
	I1201 10:11:19.460493    7156 main.go:141] libmachine: Parsing certificate...
	I1201 10:11:19.460544    7156 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/cert.pem
	I1201 10:11:19.460574    7156 main.go:141] libmachine: Decoding PEM data...
	I1201 10:11:19.460584    7156 main.go:141] libmachine: Parsing certificate...
	I1201 10:11:19.460939    7156 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17703-5375/.minikube/cache/iso/arm64/minikube-v1.32.1-1701387192-17703-arm64.iso...
	I1201 10:11:19.593355    7156 main.go:141] libmachine: Creating SSH key...
	I1201 10:11:19.791244    7156 main.go:141] libmachine: Creating Disk image...
	I1201 10:11:19.791255    7156 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1201 10:11:19.791457    7156 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/force-systemd-env-043000/disk.qcow2.raw /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/force-systemd-env-043000/disk.qcow2
	I1201 10:11:19.804117    7156 main.go:141] libmachine: STDOUT: 
	I1201 10:11:19.804148    7156 main.go:141] libmachine: STDERR: 
	I1201 10:11:19.804217    7156 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/force-systemd-env-043000/disk.qcow2 +20000M
	I1201 10:11:19.815298    7156 main.go:141] libmachine: STDOUT: Image resized.
	
	I1201 10:11:19.815315    7156 main.go:141] libmachine: STDERR: 
	I1201 10:11:19.815339    7156 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/force-systemd-env-043000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/force-systemd-env-043000/disk.qcow2
	I1201 10:11:19.815347    7156 main.go:141] libmachine: Starting QEMU VM...
	I1201 10:11:19.815387    7156 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/force-systemd-env-043000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/force-systemd-env-043000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/force-systemd-env-043000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:50:df:d7:86:46 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/force-systemd-env-043000/disk.qcow2
	I1201 10:11:19.817139    7156 main.go:141] libmachine: STDOUT: 
	I1201 10:11:19.817155    7156 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:11:19.817178    7156 client.go:171] LocalClient.Create took 356.797917ms
	I1201 10:11:21.819353    7156 start.go:128] duration metric: createHost completed in 2.37986275s
	I1201 10:11:21.819424    7156 start.go:83] releasing machines lock for "force-systemd-env-043000", held for 2.379997791s
	W1201 10:11:21.819493    7156 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:11:21.835661    7156 out.go:177] * Deleting "force-systemd-env-043000" in qemu2 ...
	W1201 10:11:21.862162    7156 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:11:21.862200    7156 start.go:709] Will try again in 5 seconds ...
	I1201 10:11:26.864304    7156 start.go:365] acquiring machines lock for force-systemd-env-043000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:11:26.864720    7156 start.go:369] acquired machines lock for "force-systemd-env-043000" in 290.458µs
	I1201 10:11:26.864861    7156 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-043000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-env-043000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1201 10:11:26.865086    7156 start.go:125] createHost starting for "" (driver="qemu2")
	I1201 10:11:26.877447    7156 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1201 10:11:26.926937    7156 start.go:159] libmachine.API.Create for "force-systemd-env-043000" (driver="qemu2")
	I1201 10:11:26.926981    7156 client.go:168] LocalClient.Create starting
	I1201 10:11:26.927135    7156 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/ca.pem
	I1201 10:11:26.927196    7156 main.go:141] libmachine: Decoding PEM data...
	I1201 10:11:26.927216    7156 main.go:141] libmachine: Parsing certificate...
	I1201 10:11:26.927284    7156 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/cert.pem
	I1201 10:11:26.927325    7156 main.go:141] libmachine: Decoding PEM data...
	I1201 10:11:26.927344    7156 main.go:141] libmachine: Parsing certificate...
	I1201 10:11:26.927821    7156 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17703-5375/.minikube/cache/iso/arm64/minikube-v1.32.1-1701387192-17703-arm64.iso...
	I1201 10:11:27.070524    7156 main.go:141] libmachine: Creating SSH key...
	I1201 10:11:27.142151    7156 main.go:141] libmachine: Creating Disk image...
	I1201 10:11:27.142157    7156 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1201 10:11:27.142316    7156 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/force-systemd-env-043000/disk.qcow2.raw /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/force-systemd-env-043000/disk.qcow2
	I1201 10:11:27.154222    7156 main.go:141] libmachine: STDOUT: 
	I1201 10:11:27.154239    7156 main.go:141] libmachine: STDERR: 
	I1201 10:11:27.154294    7156 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/force-systemd-env-043000/disk.qcow2 +20000M
	I1201 10:11:27.165070    7156 main.go:141] libmachine: STDOUT: Image resized.
	
	I1201 10:11:27.165085    7156 main.go:141] libmachine: STDERR: 
	I1201 10:11:27.165101    7156 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/force-systemd-env-043000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/force-systemd-env-043000/disk.qcow2
	I1201 10:11:27.165107    7156 main.go:141] libmachine: Starting QEMU VM...
	I1201 10:11:27.165150    7156 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/force-systemd-env-043000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/force-systemd-env-043000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/force-systemd-env-043000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:de:35:81:9d:3b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/force-systemd-env-043000/disk.qcow2
	I1201 10:11:27.166787    7156 main.go:141] libmachine: STDOUT: 
	I1201 10:11:27.166801    7156 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:11:27.166813    7156 client.go:171] LocalClient.Create took 239.829833ms
	I1201 10:11:29.169061    7156 start.go:128] duration metric: createHost completed in 2.303988542s
	I1201 10:11:29.169134    7156 start.go:83] releasing machines lock for "force-systemd-env-043000", held for 2.304443334s
	W1201 10:11:29.169534    7156 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-043000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-043000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:11:29.188200    7156 out.go:177] 
	W1201 10:11:29.198227    7156 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1201 10:11:29.198294    7156 out.go:239] * 
	* 
	W1201 10:11:29.200636    7156 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 10:11:29.211090    7156 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-043000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-043000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-043000 ssh "docker info --format {{.CgroupDriver}}": exit status 89 (82.8315ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p force-systemd-env-043000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-043000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 89
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2023-12-01 10:11:29.311215 -0800 PST m=+503.352512126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-043000 -n force-systemd-env-043000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-043000 -n force-systemd-env-043000: exit status 7 (35.373417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-043000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-043000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-043000
--- FAIL: TestForceSystemdEnv (10.15s)
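
All of the start failures above stop at the same point: the bundled client /opt/socket_vmnet/bin/socket_vmnet_client gets "Connection refused" on /var/run/socket_vmnet, so QEMU is never launched and no VM is created. A minimal shell sketch for checking the daemon on the build host follows; the paths are taken from the log above, and the idea that the daemon is simply not running (rather than, say, a permissions problem) is an assumption, not something this report establishes.

    # Is the socket_vmnet daemon serving its UNIX socket? (paths from the log above)
    test -S /var/run/socket_vmnet && echo "socket present" || echo "socket missing"
    # Is any socket_vmnet process running at all?
    pgrep -fl socket_vmnet || echo "no socket_vmnet process running"
    # Same client binary the tests invoke, wrapping a no-op command instead of QEMU:
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true \
      && echo "client connected" || echo "client cannot connect; restart the daemon"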

TestErrorSpam/setup (9.92s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-349000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-349000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000 --driver=qemu2 : exit status 80 (9.917280167s)

-- stdout --
	* [nospam-349000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17703
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node nospam-349000 in cluster nospam-349000
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-349000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-349000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-349000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-349000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-349000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
- MINIKUBE_LOCATION=17703
- KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting control plane node nospam-349000 in cluster nospam-349000
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "nospam-349000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused



error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-349000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (9.92s)

TestFunctional/serial/StartWithProxy (9.97s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-149000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-149000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (9.897646917s)

-- stdout --
	* [functional-149000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17703
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node functional-149000 in cluster functional-149000
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-149000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:50335 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:50335 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:50335 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-149000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2232: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-149000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2237: start stdout=* [functional-149000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
- MINIKUBE_LOCATION=17703
- KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting control plane node functional-149000 in cluster functional-149000
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "functional-149000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused



, want: *Found network options:*
functional_test.go:2242: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:50335 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:50335 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:50335 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-149000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-149000 -n functional-149000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-149000 -n functional-149000: exit status 7 (69.440792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-149000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (9.97s)

TestFunctional/serial/SoftStart (5.29s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-149000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-149000 --alsologtostderr -v=8: exit status 80 (5.221075708s)

-- stdout --
	* [functional-149000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17703
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node functional-149000 in cluster functional-149000
	* Restarting existing qemu2 VM for "functional-149000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-149000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1201 10:04:08.398155    6026 out.go:296] Setting OutFile to fd 1 ...
	I1201 10:04:08.398315    6026 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:04:08.398319    6026 out.go:309] Setting ErrFile to fd 2...
	I1201 10:04:08.398321    6026 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:04:08.398450    6026 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17703-5375/.minikube/bin
	I1201 10:04:08.399432    6026 out.go:303] Setting JSON to false
	I1201 10:04:08.415347    6026 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2022,"bootTime":1701451826,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1201 10:04:08.415435    6026 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1201 10:04:08.419144    6026 out.go:177] * [functional-149000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1201 10:04:08.439026    6026 out.go:177]   - MINIKUBE_LOCATION=17703
	I1201 10:04:08.435141    6026 notify.go:220] Checking for updates...
	I1201 10:04:08.448052    6026 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	I1201 10:04:08.456006    6026 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1201 10:04:08.459986    6026 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 10:04:08.462934    6026 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	I1201 10:04:08.470018    6026 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 10:04:08.474315    6026 config.go:182] Loaded profile config "functional-149000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1201 10:04:08.474373    6026 driver.go:392] Setting default libvirt URI to qemu:///system
	I1201 10:04:08.478983    6026 out.go:177] * Using the qemu2 driver based on existing profile
	I1201 10:04:08.486922    6026 start.go:298] selected driver: qemu2
	I1201 10:04:08.486929    6026 start.go:902] validating driver "qemu2" against &{Name:functional-149000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-149000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1201 10:04:08.486988    6026 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 10:04:08.489645    6026 cni.go:84] Creating CNI manager for ""
	I1201 10:04:08.489663    6026 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1201 10:04:08.489671    6026 start_flags.go:323] config:
	{Name:functional-149000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-149000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1201 10:04:08.494629    6026 iso.go:125] acquiring lock: {Name:mk3aa781e939aa763c73ad5fb916ad9c0c6cf746 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 10:04:08.502014    6026 out.go:177] * Starting control plane node functional-149000 in cluster functional-149000
	I1201 10:04:08.505052    6026 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1201 10:04:08.505078    6026 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I1201 10:04:08.505087    6026 cache.go:56] Caching tarball of preloaded images
	I1201 10:04:08.505148    6026 preload.go:174] Found /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1201 10:04:08.505155    6026 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1201 10:04:08.505237    6026 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17703-5375/.minikube/profiles/functional-149000/config.json ...
	I1201 10:04:08.505559    6026 start.go:365] acquiring machines lock for functional-149000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:04:08.505587    6026 start.go:369] acquired machines lock for "functional-149000" in 21.125µs
	I1201 10:04:08.505597    6026 start.go:96] Skipping create...Using existing machine configuration
	I1201 10:04:08.505604    6026 fix.go:54] fixHost starting: 
	I1201 10:04:08.505728    6026 fix.go:102] recreateIfNeeded on functional-149000: state=Stopped err=<nil>
	W1201 10:04:08.505738    6026 fix.go:128] unexpected machine state, will restart: <nil>
	I1201 10:04:08.513955    6026 out.go:177] * Restarting existing qemu2 VM for "functional-149000" ...
	I1201 10:04:08.518032    6026 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/functional-149000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/functional-149000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/functional-149000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:5c:63:bf:3d:03 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/functional-149000/disk.qcow2
	I1201 10:04:08.520258    6026 main.go:141] libmachine: STDOUT: 
	I1201 10:04:08.520280    6026 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:04:08.520312    6026 fix.go:56] fixHost completed within 14.70825ms
	I1201 10:04:08.520318    6026 start.go:83] releasing machines lock for "functional-149000", held for 14.727375ms
	W1201 10:04:08.520326    6026 start.go:694] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1201 10:04:08.520383    6026 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:04:08.520388    6026 start.go:709] Will try again in 5 seconds ...
	I1201 10:04:13.522023    6026 start.go:365] acquiring machines lock for functional-149000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:04:13.522473    6026 start.go:369] acquired machines lock for "functional-149000" in 356.542µs
	I1201 10:04:13.522619    6026 start.go:96] Skipping create...Using existing machine configuration
	I1201 10:04:13.522636    6026 fix.go:54] fixHost starting: 
	I1201 10:04:13.523335    6026 fix.go:102] recreateIfNeeded on functional-149000: state=Stopped err=<nil>
	W1201 10:04:13.523366    6026 fix.go:128] unexpected machine state, will restart: <nil>
	I1201 10:04:13.532966    6026 out.go:177] * Restarting existing qemu2 VM for "functional-149000" ...
	I1201 10:04:13.539165    6026 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/functional-149000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/functional-149000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/functional-149000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:5c:63:bf:3d:03 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/functional-149000/disk.qcow2
	I1201 10:04:13.548598    6026 main.go:141] libmachine: STDOUT: 
	I1201 10:04:13.548676    6026 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:04:13.548739    6026 fix.go:56] fixHost completed within 26.103791ms
	I1201 10:04:13.548761    6026 start.go:83] releasing machines lock for "functional-149000", held for 26.26075ms
	W1201 10:04:13.548915    6026 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-149000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-149000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:04:13.557901    6026 out.go:177] 
	W1201 10:04:13.561946    6026 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1201 10:04:13.561984    6026 out.go:239] * 
	* 
	W1201 10:04:13.564771    6026 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 10:04:13.573864    6026 out.go:177] 

** /stderr **
functional_test.go:657: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-149000 --alsologtostderr -v=8": exit status 80
functional_test.go:659: soft start took 5.222955958s for "functional-149000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-149000 -n functional-149000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-149000 -n functional-149000: exit status 7 (68.727625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-149000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.29s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
functional_test.go:677: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (27.85375ms)

** stderr ** 
	error: current-context is not set

** /stderr **
functional_test.go:679: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:683: expected current-context = "functional-149000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-149000 -n functional-149000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-149000 -n functional-149000: exit status 7 (31.616459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-149000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-149000 get po -A
functional_test.go:692: (dbg) Non-zero exit: kubectl --context functional-149000 get po -A: exit status 1 (25.007625ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-149000

** /stderr **
functional_test.go:694: failed to get kubectl pods: args "kubectl --context functional-149000 get po -A" : exit status 1
functional_test.go:698: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-149000\n"*: args "kubectl --context functional-149000 get po -A"
functional_test.go:701: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-149000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-149000 -n functional-149000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-149000 -n functional-149000: exit status 7 (32.293541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-149000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.06s)
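
The KubeContext and KubectlGetPods failures are downstream of the failed start: because "minikube start -p functional-149000" exited with status 80, no functional-149000 context was ever written to the kubeconfig, so kubectl has nothing to select. A small sketch of how to confirm that on the host is shown below; the KUBECONFIG path is taken from the log, and the kubectl commands are standard ones, not commands this report ran.

    # Confirm the profile's context never made it into the kubeconfig (path from the log).
    export KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
    kubectl config get-contexts              # expect no functional-149000 entry
    kubectl config current-context || true   # reproduces "error: current-context is not set"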

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 ssh sudo crictl images
functional_test.go:1120: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-149000 ssh sudo crictl images: exit status 89 (43.882166ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-149000"

-- /stdout --
functional_test.go:1122: failed to get images by "out/minikube-darwin-arm64 -p functional-149000 ssh sudo crictl images" ssh exit status 89
functional_test.go:1126: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-149000"

-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-149000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 89 (40.816417ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-149000"

-- /stdout --
functional_test.go:1146: failed to manually delete image "out/minikube-darwin-arm64 -p functional-149000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 89
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-149000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 89 (40.9015ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-149000"

-- /stdout --
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-149000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 89 (40.1155ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-149000"

-- /stdout --
functional_test.go:1161: expected "out/minikube-darwin-arm64 -p functional-149000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 89
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.16s)

TestFunctional/serial/MinikubeKubectlCmd (0.5s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 kubectl -- --context functional-149000 get pods
functional_test.go:712: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-149000 kubectl -- --context functional-149000 get pods: exit status 1 (465.278458ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-149000
	* no server found for cluster "functional-149000"

** /stderr **
functional_test.go:715: failed to get pods. args "out/minikube-darwin-arm64 -p functional-149000 kubectl -- --context functional-149000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-149000 -n functional-149000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-149000 -n functional-149000: exit status 7 (34.185584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-149000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (0.50s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.66s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-149000 get pods
functional_test.go:737: (dbg) Non-zero exit: out/kubectl --context functional-149000 get pods: exit status 1 (624.796ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-149000
	* no server found for cluster "functional-149000"

** /stderr **
functional_test.go:740: failed to run kubectl directly. args "out/kubectl --context functional-149000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-149000 -n functional-149000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-149000 -n functional-149000: exit status 7 (32.243666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-149000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.66s)

TestFunctional/serial/ExtraConfig (5.29s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-149000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-149000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.2190865s)

-- stdout --
	* [functional-149000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17703
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node functional-149000 in cluster functional-149000
	* Restarting existing qemu2 VM for "functional-149000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-149000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-149000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:755: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-149000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:757: restart took 5.219652167s for "functional-149000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-149000 -n functional-149000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-149000 -n functional-149000: exit status 7 (69.839208ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-149000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.29s)
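
Both restart attempts in this block fail on the same driver error: minikube launches the qemu2 VM through socket_vmnet_client, and the connection to /var/run/socket_vmnet is refused, so the host never leaves the Stopped state. A minimal check of the socket_vmnet daemon on the build agent might look like the following (a sketch only: it assumes socket_vmnet was installed via Homebrew and is managed by brew services, which this log does not show):

	# is the socket_vmnet daemon running, and does its socket exist?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet
	# if not, restart the service (assumes a Homebrew-managed socket_vmnet), then retry the failing start
	sudo brew services restart socket_vmnet
	out/minikube-darwin-arm64 start -p functional-149000 --wait=all
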
TestFunctional/serial/ComponentHealth (0.06s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-149000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:806: (dbg) Non-zero exit: kubectl --context functional-149000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (27.054875ms)
** stderr ** 
	error: context "functional-149000" does not exist
** /stderr **
functional_test.go:808: failed to get components. args "kubectl --context functional-149000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-149000 -n functional-149000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-149000 -n functional-149000: exit status 7 (32.191583ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-149000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)
TestFunctional/serial/LogsCmd (0.1s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 logs
functional_test.go:1232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-149000 logs: exit status 89 (97.750542ms)
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-993000 | jenkins | v1.32.0 | 01 Dec 23 10:03 PST |                     |
	|         | -p download-only-993000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| start   | -o=json --download-only                                                  | download-only-993000 | jenkins | v1.32.0 | 01 Dec 23 10:03 PST |                     |
	|         | -p download-only-993000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| start   | -o=json --download-only                                                  | download-only-993000 | jenkins | v1.32.0 | 01 Dec 23 10:03 PST |                     |
	|         | -p download-only-993000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.1                                        |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.32.0 | 01 Dec 23 10:03 PST | 01 Dec 23 10:03 PST |
	| delete  | -p download-only-993000                                                  | download-only-993000 | jenkins | v1.32.0 | 01 Dec 23 10:03 PST | 01 Dec 23 10:03 PST |
	| delete  | -p download-only-993000                                                  | download-only-993000 | jenkins | v1.32.0 | 01 Dec 23 10:03 PST | 01 Dec 23 10:03 PST |
	| start   | --download-only -p                                                       | binary-mirror-407000 | jenkins | v1.32.0 | 01 Dec 23 10:03 PST |                     |
	|         | binary-mirror-407000                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
	|         | --binary-mirror                                                          |                      |         |         |                     |                     |
	|         | http://127.0.0.1:50324                                                   |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-407000                                                  | binary-mirror-407000 | jenkins | v1.32.0 | 01 Dec 23 10:03 PST | 01 Dec 23 10:03 PST |
	| addons  | enable dashboard -p                                                      | addons-659000        | jenkins | v1.32.0 | 01 Dec 23 10:03 PST |                     |
	|         | addons-659000                                                            |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                     | addons-659000        | jenkins | v1.32.0 | 01 Dec 23 10:03 PST |                     |
	|         | addons-659000                                                            |                      |         |         |                     |                     |
	| start   | -p addons-659000 --wait=true                                             | addons-659000        | jenkins | v1.32.0 | 01 Dec 23 10:03 PST |                     |
	|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
	|         | --addons=registry                                                        |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
	| delete  | -p addons-659000                                                         | addons-659000        | jenkins | v1.32.0 | 01 Dec 23 10:03 PST | 01 Dec 23 10:03 PST |
	| start   | -p nospam-349000 -n=1 --memory=2250 --wait=false                         | nospam-349000        | jenkins | v1.32.0 | 01 Dec 23 10:03 PST |                     |
	|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000 |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| start   | nospam-349000 --log_dir                                                  | nospam-349000        | jenkins | v1.32.0 | 01 Dec 23 10:03 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-349000 --log_dir                                                  | nospam-349000        | jenkins | v1.32.0 | 01 Dec 23 10:03 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-349000 --log_dir                                                  | nospam-349000        | jenkins | v1.32.0 | 01 Dec 23 10:03 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| pause   | nospam-349000 --log_dir                                                  | nospam-349000        | jenkins | v1.32.0 | 01 Dec 23 10:03 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-349000 --log_dir                                                  | nospam-349000        | jenkins | v1.32.0 | 01 Dec 23 10:03 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-349000 --log_dir                                                  | nospam-349000        | jenkins | v1.32.0 | 01 Dec 23 10:03 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| unpause | nospam-349000 --log_dir                                                  | nospam-349000        | jenkins | v1.32.0 | 01 Dec 23 10:03 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-349000 --log_dir                                                  | nospam-349000        | jenkins | v1.32.0 | 01 Dec 23 10:03 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-349000 --log_dir                                                  | nospam-349000        | jenkins | v1.32.0 | 01 Dec 23 10:03 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| stop    | nospam-349000 --log_dir                                                  | nospam-349000        | jenkins | v1.32.0 | 01 Dec 23 10:03 PST | 01 Dec 23 10:03 PST |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-349000 --log_dir                                                  | nospam-349000        | jenkins | v1.32.0 | 01 Dec 23 10:03 PST | 01 Dec 23 10:03 PST |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-349000 --log_dir                                                  | nospam-349000        | jenkins | v1.32.0 | 01 Dec 23 10:03 PST | 01 Dec 23 10:03 PST |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| delete  | -p nospam-349000                                                         | nospam-349000        | jenkins | v1.32.0 | 01 Dec 23 10:03 PST | 01 Dec 23 10:03 PST |
	| start   | -p functional-149000                                                     | functional-149000    | jenkins | v1.32.0 | 01 Dec 23 10:03 PST |                     |
	|         | --memory=4000                                                            |                      |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
	| start   | -p functional-149000                                                     | functional-149000    | jenkins | v1.32.0 | 01 Dec 23 10:04 PST |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
	| cache   | functional-149000 cache add                                              | functional-149000    | jenkins | v1.32.0 | 01 Dec 23 10:04 PST | 01 Dec 23 10:04 PST |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | functional-149000 cache add                                              | functional-149000    | jenkins | v1.32.0 | 01 Dec 23 10:04 PST | 01 Dec 23 10:04 PST |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | functional-149000 cache add                                              | functional-149000    | jenkins | v1.32.0 | 01 Dec 23 10:04 PST | 01 Dec 23 10:04 PST |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-149000 cache add                                              | functional-149000    | jenkins | v1.32.0 | 01 Dec 23 10:04 PST | 01 Dec 23 10:04 PST |
	|         | minikube-local-cache-test:functional-149000                              |                      |         |         |                     |                     |
	| cache   | functional-149000 cache delete                                           | functional-149000    | jenkins | v1.32.0 | 01 Dec 23 10:04 PST | 01 Dec 23 10:04 PST |
	|         | minikube-local-cache-test:functional-149000                              |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 01 Dec 23 10:04 PST | 01 Dec 23 10:04 PST |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.32.0 | 01 Dec 23 10:04 PST | 01 Dec 23 10:04 PST |
	| ssh     | functional-149000 ssh sudo                                               | functional-149000    | jenkins | v1.32.0 | 01 Dec 23 10:04 PST |                     |
	|         | crictl images                                                            |                      |         |         |                     |                     |
	| ssh     | functional-149000                                                        | functional-149000    | jenkins | v1.32.0 | 01 Dec 23 10:04 PST |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| ssh     | functional-149000 ssh                                                    | functional-149000    | jenkins | v1.32.0 | 01 Dec 23 10:04 PST |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-149000 cache reload                                           | functional-149000    | jenkins | v1.32.0 | 01 Dec 23 10:04 PST | 01 Dec 23 10:04 PST |
	| ssh     | functional-149000 ssh                                                    | functional-149000    | jenkins | v1.32.0 | 01 Dec 23 10:04 PST |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 01 Dec 23 10:04 PST | 01 Dec 23 10:04 PST |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 01 Dec 23 10:04 PST | 01 Dec 23 10:04 PST |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| kubectl | functional-149000 kubectl --                                             | functional-149000    | jenkins | v1.32.0 | 01 Dec 23 10:04 PST |                     |
	|         | --context functional-149000                                              |                      |         |         |                     |                     |
	|         | get pods                                                                 |                      |         |         |                     |                     |
	| start   | -p functional-149000                                                     | functional-149000    | jenkins | v1.32.0 | 01 Dec 23 10:04 PST |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
	|         | --wait=all                                                               |                      |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/01 10:04:18
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.21.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1201 10:04:18.357847    6108 out.go:296] Setting OutFile to fd 1 ...
	I1201 10:04:18.357990    6108 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:04:18.357992    6108 out.go:309] Setting ErrFile to fd 2...
	I1201 10:04:18.357994    6108 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:04:18.358119    6108 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17703-5375/.minikube/bin
	I1201 10:04:18.359173    6108 out.go:303] Setting JSON to false
	I1201 10:04:18.375112    6108 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2032,"bootTime":1701451826,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1201 10:04:18.375197    6108 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1201 10:04:18.386712    6108 out.go:177] * [functional-149000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1201 10:04:18.400632    6108 out.go:177]   - MINIKUBE_LOCATION=17703
	I1201 10:04:18.394672    6108 notify.go:220] Checking for updates...
	I1201 10:04:18.407501    6108 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	I1201 10:04:18.410620    6108 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1201 10:04:18.417525    6108 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 10:04:18.424441    6108 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	I1201 10:04:18.428496    6108 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 10:04:18.431925    6108 config.go:182] Loaded profile config "functional-149000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1201 10:04:18.431972    6108 driver.go:392] Setting default libvirt URI to qemu:///system
	I1201 10:04:18.436554    6108 out.go:177] * Using the qemu2 driver based on existing profile
	I1201 10:04:18.444576    6108 start.go:298] selected driver: qemu2
	I1201 10:04:18.444581    6108 start.go:902] validating driver "qemu2" against &{Name:functional-149000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-149000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1201 10:04:18.444676    6108 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 10:04:18.447306    6108 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 10:04:18.447348    6108 cni.go:84] Creating CNI manager for ""
	I1201 10:04:18.447356    6108 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1201 10:04:18.447360    6108 start_flags.go:323] config:
	{Name:functional-149000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-149000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1201 10:04:18.452499    6108 iso.go:125] acquiring lock: {Name:mk3aa781e939aa763c73ad5fb916ad9c0c6cf746 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 10:04:18.459590    6108 out.go:177] * Starting control plane node functional-149000 in cluster functional-149000
	I1201 10:04:18.462528    6108 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1201 10:04:18.462558    6108 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I1201 10:04:18.462564    6108 cache.go:56] Caching tarball of preloaded images
	I1201 10:04:18.462628    6108 preload.go:174] Found /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1201 10:04:18.462631    6108 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1201 10:04:18.462708    6108 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17703-5375/.minikube/profiles/functional-149000/config.json ...
	I1201 10:04:18.463179    6108 start.go:365] acquiring machines lock for functional-149000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:04:18.463213    6108 start.go:369] acquired machines lock for "functional-149000" in 29.584µs
	I1201 10:04:18.463221    6108 start.go:96] Skipping create...Using existing machine configuration
	I1201 10:04:18.463226    6108 fix.go:54] fixHost starting: 
	I1201 10:04:18.463349    6108 fix.go:102] recreateIfNeeded on functional-149000: state=Stopped err=<nil>
	W1201 10:04:18.463356    6108 fix.go:128] unexpected machine state, will restart: <nil>
	I1201 10:04:18.469492    6108 out.go:177] * Restarting existing qemu2 VM for "functional-149000" ...
	I1201 10:04:18.473619    6108 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/functional-149000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/functional-149000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/functional-149000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:5c:63:bf:3d:03 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/functional-149000/disk.qcow2
	I1201 10:04:18.476010    6108 main.go:141] libmachine: STDOUT: 
	I1201 10:04:18.476032    6108 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:04:18.476064    6108 fix.go:56] fixHost completed within 12.837ms
	I1201 10:04:18.476067    6108 start.go:83] releasing machines lock for "functional-149000", held for 12.851292ms
	W1201 10:04:18.476084    6108 start.go:694] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1201 10:04:18.476115    6108 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:04:18.476120    6108 start.go:709] Will try again in 5 seconds ...
	I1201 10:04:23.478287    6108 start.go:365] acquiring machines lock for functional-149000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:04:23.478616    6108 start.go:369] acquired machines lock for "functional-149000" in 261.416µs
	I1201 10:04:23.478725    6108 start.go:96] Skipping create...Using existing machine configuration
	I1201 10:04:23.478736    6108 fix.go:54] fixHost starting: 
	I1201 10:04:23.479414    6108 fix.go:102] recreateIfNeeded on functional-149000: state=Stopped err=<nil>
	W1201 10:04:23.479433    6108 fix.go:128] unexpected machine state, will restart: <nil>
	I1201 10:04:23.491031    6108 out.go:177] * Restarting existing qemu2 VM for "functional-149000" ...
	I1201 10:04:23.499072    6108 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/functional-149000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/functional-149000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/functional-149000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:5c:63:bf:3d:03 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/functional-149000/disk.qcow2
	I1201 10:04:23.508295    6108 main.go:141] libmachine: STDOUT: 
	I1201 10:04:23.508350    6108 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:04:23.508475    6108 fix.go:56] fixHost completed within 29.699208ms
	I1201 10:04:23.508488    6108 start.go:83] releasing machines lock for "functional-149000", held for 29.857333ms
	W1201 10:04:23.508690    6108 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-149000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:04:23.516890    6108 out.go:177] 
	W1201 10:04:23.521036    6108 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1201 10:04:23.521191    6108 out.go:239] * 
	W1201 10:04:23.523957    6108 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 10:04:23.533868    6108 out.go:177] 
	
	* 
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-149000"
-- /stdout --
functional_test.go:1234: out/minikube-darwin-arm64 -p functional-149000 logs failed: exit status 89
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
**** 
* ==> Audit <==
* |---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-993000 | jenkins | v1.32.0 | 01 Dec 23 10:03 PST |                     |
|         | -p download-only-993000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.16.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | -o=json --download-only                                                  | download-only-993000 | jenkins | v1.32.0 | 01 Dec 23 10:03 PST |                     |
|         | -p download-only-993000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.28.4                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | -o=json --download-only                                                  | download-only-993000 | jenkins | v1.32.0 | 01 Dec 23 10:03 PST |                     |
|         | -p download-only-993000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.29.0-rc.1                                        |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.32.0 | 01 Dec 23 10:03 PST | 01 Dec 23 10:03 PST |
| delete  | -p download-only-993000                                                  | download-only-993000 | jenkins | v1.32.0 | 01 Dec 23 10:03 PST | 01 Dec 23 10:03 PST |
| delete  | -p download-only-993000                                                  | download-only-993000 | jenkins | v1.32.0 | 01 Dec 23 10:03 PST | 01 Dec 23 10:03 PST |
| start   | --download-only -p                                                       | binary-mirror-407000 | jenkins | v1.32.0 | 01 Dec 23 10:03 PST |                     |
|         | binary-mirror-407000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:50324                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-407000                                                  | binary-mirror-407000 | jenkins | v1.32.0 | 01 Dec 23 10:03 PST | 01 Dec 23 10:03 PST |
| addons  | enable dashboard -p                                                      | addons-659000        | jenkins | v1.32.0 | 01 Dec 23 10:03 PST |                     |
|         | addons-659000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-659000        | jenkins | v1.32.0 | 01 Dec 23 10:03 PST |                     |
|         | addons-659000                                                            |                      |         |         |                     |                     |
| start   | -p addons-659000 --wait=true                                             | addons-659000        | jenkins | v1.32.0 | 01 Dec 23 10:03 PST |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-659000                                                         | addons-659000        | jenkins | v1.32.0 | 01 Dec 23 10:03 PST | 01 Dec 23 10:03 PST |
| start   | -p nospam-349000 -n=1 --memory=2250 --wait=false                         | nospam-349000        | jenkins | v1.32.0 | 01 Dec 23 10:03 PST |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-349000 --log_dir                                                  | nospam-349000        | jenkins | v1.32.0 | 01 Dec 23 10:03 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-349000 --log_dir                                                  | nospam-349000        | jenkins | v1.32.0 | 01 Dec 23 10:03 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-349000 --log_dir                                                  | nospam-349000        | jenkins | v1.32.0 | 01 Dec 23 10:03 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-349000 --log_dir                                                  | nospam-349000        | jenkins | v1.32.0 | 01 Dec 23 10:03 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-349000 --log_dir                                                  | nospam-349000        | jenkins | v1.32.0 | 01 Dec 23 10:03 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-349000 --log_dir                                                  | nospam-349000        | jenkins | v1.32.0 | 01 Dec 23 10:03 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-349000 --log_dir                                                  | nospam-349000        | jenkins | v1.32.0 | 01 Dec 23 10:03 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-349000 --log_dir                                                  | nospam-349000        | jenkins | v1.32.0 | 01 Dec 23 10:03 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-349000 --log_dir                                                  | nospam-349000        | jenkins | v1.32.0 | 01 Dec 23 10:03 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-349000 --log_dir                                                  | nospam-349000        | jenkins | v1.32.0 | 01 Dec 23 10:03 PST | 01 Dec 23 10:03 PST |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-349000 --log_dir                                                  | nospam-349000        | jenkins | v1.32.0 | 01 Dec 23 10:03 PST | 01 Dec 23 10:03 PST |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-349000 --log_dir                                                  | nospam-349000        | jenkins | v1.32.0 | 01 Dec 23 10:03 PST | 01 Dec 23 10:03 PST |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-349000                                                         | nospam-349000        | jenkins | v1.32.0 | 01 Dec 23 10:03 PST | 01 Dec 23 10:03 PST |
| start   | -p functional-149000                                                     | functional-149000    | jenkins | v1.32.0 | 01 Dec 23 10:03 PST |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-149000                                                     | functional-149000    | jenkins | v1.32.0 | 01 Dec 23 10:04 PST |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-149000 cache add                                              | functional-149000    | jenkins | v1.32.0 | 01 Dec 23 10:04 PST | 01 Dec 23 10:04 PST |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-149000 cache add                                              | functional-149000    | jenkins | v1.32.0 | 01 Dec 23 10:04 PST | 01 Dec 23 10:04 PST |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-149000 cache add                                              | functional-149000    | jenkins | v1.32.0 | 01 Dec 23 10:04 PST | 01 Dec 23 10:04 PST |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-149000 cache add                                              | functional-149000    | jenkins | v1.32.0 | 01 Dec 23 10:04 PST | 01 Dec 23 10:04 PST |
|         | minikube-local-cache-test:functional-149000                              |                      |         |         |                     |                     |
| cache   | functional-149000 cache delete                                           | functional-149000    | jenkins | v1.32.0 | 01 Dec 23 10:04 PST | 01 Dec 23 10:04 PST |
|         | minikube-local-cache-test:functional-149000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 01 Dec 23 10:04 PST | 01 Dec 23 10:04 PST |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.32.0 | 01 Dec 23 10:04 PST | 01 Dec 23 10:04 PST |
| ssh     | functional-149000 ssh sudo                                               | functional-149000    | jenkins | v1.32.0 | 01 Dec 23 10:04 PST |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-149000                                                        | functional-149000    | jenkins | v1.32.0 | 01 Dec 23 10:04 PST |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-149000 ssh                                                    | functional-149000    | jenkins | v1.32.0 | 01 Dec 23 10:04 PST |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-149000 cache reload                                           | functional-149000    | jenkins | v1.32.0 | 01 Dec 23 10:04 PST | 01 Dec 23 10:04 PST |
| ssh     | functional-149000 ssh                                                    | functional-149000    | jenkins | v1.32.0 | 01 Dec 23 10:04 PST |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 01 Dec 23 10:04 PST | 01 Dec 23 10:04 PST |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 01 Dec 23 10:04 PST | 01 Dec 23 10:04 PST |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-149000 kubectl --                                             | functional-149000    | jenkins | v1.32.0 | 01 Dec 23 10:04 PST |                     |
|         | --context functional-149000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-149000                                                     | functional-149000    | jenkins | v1.32.0 | 01 Dec 23 10:04 PST |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

* 
* ==> Last Start <==
* Log file created at: 2023/12/01 10:04:18
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.21.4 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1201 10:04:18.357847    6108 out.go:296] Setting OutFile to fd 1 ...
I1201 10:04:18.357990    6108 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1201 10:04:18.357992    6108 out.go:309] Setting ErrFile to fd 2...
I1201 10:04:18.357994    6108 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1201 10:04:18.358119    6108 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17703-5375/.minikube/bin
I1201 10:04:18.359173    6108 out.go:303] Setting JSON to false
I1201 10:04:18.375112    6108 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2032,"bootTime":1701451826,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W1201 10:04:18.375197    6108 start.go:136] gopshost.Virtualization returned error: not implemented yet
I1201 10:04:18.386712    6108 out.go:177] * [functional-149000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
I1201 10:04:18.400632    6108 out.go:177]   - MINIKUBE_LOCATION=17703
I1201 10:04:18.394672    6108 notify.go:220] Checking for updates...
I1201 10:04:18.407501    6108 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
I1201 10:04:18.410620    6108 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I1201 10:04:18.417525    6108 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1201 10:04:18.424441    6108 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
I1201 10:04:18.428496    6108 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I1201 10:04:18.431925    6108 config.go:182] Loaded profile config "functional-149000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1201 10:04:18.431972    6108 driver.go:392] Setting default libvirt URI to qemu:///system
I1201 10:04:18.436554    6108 out.go:177] * Using the qemu2 driver based on existing profile
I1201 10:04:18.444576    6108 start.go:298] selected driver: qemu2
I1201 10:04:18.444581    6108 start.go:902] validating driver "qemu2" against &{Name:functional-149000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.4 ClusterName:functional-149000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize
:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
I1201 10:04:18.444676    6108 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1201 10:04:18.447306    6108 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1201 10:04:18.447348    6108 cni.go:84] Creating CNI manager for ""
I1201 10:04:18.447356    6108 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1201 10:04:18.447360    6108 start_flags.go:323] config:
{Name:functional-149000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-149000 Namespace:default AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize
:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
I1201 10:04:18.452499    6108 iso.go:125] acquiring lock: {Name:mk3aa781e939aa763c73ad5fb916ad9c0c6cf746 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1201 10:04:18.459590    6108 out.go:177] * Starting control plane node functional-149000 in cluster functional-149000
I1201 10:04:18.462528    6108 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
I1201 10:04:18.462558    6108 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
I1201 10:04:18.462564    6108 cache.go:56] Caching tarball of preloaded images
I1201 10:04:18.462628    6108 preload.go:174] Found /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I1201 10:04:18.462631    6108 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
I1201 10:04:18.462708    6108 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17703-5375/.minikube/profiles/functional-149000/config.json ...
I1201 10:04:18.463179    6108 start.go:365] acquiring machines lock for functional-149000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1201 10:04:18.463213    6108 start.go:369] acquired machines lock for "functional-149000" in 29.584µs
I1201 10:04:18.463221    6108 start.go:96] Skipping create...Using existing machine configuration
I1201 10:04:18.463226    6108 fix.go:54] fixHost starting: 
I1201 10:04:18.463349    6108 fix.go:102] recreateIfNeeded on functional-149000: state=Stopped err=<nil>
W1201 10:04:18.463356    6108 fix.go:128] unexpected machine state, will restart: <nil>
I1201 10:04:18.469492    6108 out.go:177] * Restarting existing qemu2 VM for "functional-149000" ...
I1201 10:04:18.473619    6108 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/functional-149000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/functional-149000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/functional-149000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:5c:63:bf:3d:03 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/functional-149000/disk.qcow2
I1201 10:04:18.476010    6108 main.go:141] libmachine: STDOUT: 
I1201 10:04:18.476032    6108 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I1201 10:04:18.476064    6108 fix.go:56] fixHost completed within 12.837ms
I1201 10:04:18.476067    6108 start.go:83] releasing machines lock for "functional-149000", held for 12.851292ms
W1201 10:04:18.476084    6108 start.go:694] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W1201 10:04:18.476115    6108 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I1201 10:04:18.476120    6108 start.go:709] Will try again in 5 seconds ...
I1201 10:04:23.478287    6108 start.go:365] acquiring machines lock for functional-149000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1201 10:04:23.478616    6108 start.go:369] acquired machines lock for "functional-149000" in 261.416µs
I1201 10:04:23.478725    6108 start.go:96] Skipping create...Using existing machine configuration
I1201 10:04:23.478736    6108 fix.go:54] fixHost starting: 
I1201 10:04:23.479414    6108 fix.go:102] recreateIfNeeded on functional-149000: state=Stopped err=<nil>
W1201 10:04:23.479433    6108 fix.go:128] unexpected machine state, will restart: <nil>
I1201 10:04:23.491031    6108 out.go:177] * Restarting existing qemu2 VM for "functional-149000" ...
I1201 10:04:23.499072    6108 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/functional-149000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/functional-149000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/functional-149000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:5c:63:bf:3d:03 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/functional-149000/disk.qcow2
I1201 10:04:23.508295    6108 main.go:141] libmachine: STDOUT: 
I1201 10:04:23.508350    6108 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I1201 10:04:23.508475    6108 fix.go:56] fixHost completed within 29.699208ms
I1201 10:04:23.508488    6108 start.go:83] releasing machines lock for "functional-149000", held for 29.857333ms
W1201 10:04:23.508690    6108 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-149000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I1201 10:04:23.516890    6108 out.go:177] 
W1201 10:04:23.521036    6108 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W1201 10:04:23.521191    6108 out.go:239] * 
W1201 10:04:23.523957    6108 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1201 10:04:23.533868    6108 out.go:177] 

* 
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p functional-149000"
***
--- FAIL: TestFunctional/serial/LogsCmd (0.10s)

TestFunctional/serial/LogsFileCmd (0.09s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd1585490306/001/logs.txt
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
**** 
* ==> Audit <==
* |---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-993000 | jenkins | v1.32.0 | 01 Dec 23 10:03 PST |                     |
|         | -p download-only-993000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.16.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | -o=json --download-only                                                  | download-only-993000 | jenkins | v1.32.0 | 01 Dec 23 10:03 PST |                     |
|         | -p download-only-993000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.28.4                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | -o=json --download-only                                                  | download-only-993000 | jenkins | v1.32.0 | 01 Dec 23 10:03 PST |                     |
|         | -p download-only-993000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.29.0-rc.1                                        |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.32.0 | 01 Dec 23 10:03 PST | 01 Dec 23 10:03 PST |
| delete  | -p download-only-993000                                                  | download-only-993000 | jenkins | v1.32.0 | 01 Dec 23 10:03 PST | 01 Dec 23 10:03 PST |
| delete  | -p download-only-993000                                                  | download-only-993000 | jenkins | v1.32.0 | 01 Dec 23 10:03 PST | 01 Dec 23 10:03 PST |
| start   | --download-only -p                                                       | binary-mirror-407000 | jenkins | v1.32.0 | 01 Dec 23 10:03 PST |                     |
|         | binary-mirror-407000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:50324                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-407000                                                  | binary-mirror-407000 | jenkins | v1.32.0 | 01 Dec 23 10:03 PST | 01 Dec 23 10:03 PST |
| addons  | enable dashboard -p                                                      | addons-659000        | jenkins | v1.32.0 | 01 Dec 23 10:03 PST |                     |
|         | addons-659000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-659000        | jenkins | v1.32.0 | 01 Dec 23 10:03 PST |                     |
|         | addons-659000                                                            |                      |         |         |                     |                     |
| start   | -p addons-659000 --wait=true                                             | addons-659000        | jenkins | v1.32.0 | 01 Dec 23 10:03 PST |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-659000                                                         | addons-659000        | jenkins | v1.32.0 | 01 Dec 23 10:03 PST | 01 Dec 23 10:03 PST |
| start   | -p nospam-349000 -n=1 --memory=2250 --wait=false                         | nospam-349000        | jenkins | v1.32.0 | 01 Dec 23 10:03 PST |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-349000 --log_dir                                                  | nospam-349000        | jenkins | v1.32.0 | 01 Dec 23 10:03 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-349000 --log_dir                                                  | nospam-349000        | jenkins | v1.32.0 | 01 Dec 23 10:03 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-349000 --log_dir                                                  | nospam-349000        | jenkins | v1.32.0 | 01 Dec 23 10:03 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-349000 --log_dir                                                  | nospam-349000        | jenkins | v1.32.0 | 01 Dec 23 10:03 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-349000 --log_dir                                                  | nospam-349000        | jenkins | v1.32.0 | 01 Dec 23 10:03 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-349000 --log_dir                                                  | nospam-349000        | jenkins | v1.32.0 | 01 Dec 23 10:03 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-349000 --log_dir                                                  | nospam-349000        | jenkins | v1.32.0 | 01 Dec 23 10:03 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-349000 --log_dir                                                  | nospam-349000        | jenkins | v1.32.0 | 01 Dec 23 10:03 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-349000 --log_dir                                                  | nospam-349000        | jenkins | v1.32.0 | 01 Dec 23 10:03 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-349000 --log_dir                                                  | nospam-349000        | jenkins | v1.32.0 | 01 Dec 23 10:03 PST | 01 Dec 23 10:03 PST |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-349000 --log_dir                                                  | nospam-349000        | jenkins | v1.32.0 | 01 Dec 23 10:03 PST | 01 Dec 23 10:03 PST |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-349000 --log_dir                                                  | nospam-349000        | jenkins | v1.32.0 | 01 Dec 23 10:03 PST | 01 Dec 23 10:03 PST |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-349000                                                         | nospam-349000        | jenkins | v1.32.0 | 01 Dec 23 10:03 PST | 01 Dec 23 10:03 PST |
| start   | -p functional-149000                                                     | functional-149000    | jenkins | v1.32.0 | 01 Dec 23 10:03 PST |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-149000                                                     | functional-149000    | jenkins | v1.32.0 | 01 Dec 23 10:04 PST |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-149000 cache add                                              | functional-149000    | jenkins | v1.32.0 | 01 Dec 23 10:04 PST | 01 Dec 23 10:04 PST |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-149000 cache add                                              | functional-149000    | jenkins | v1.32.0 | 01 Dec 23 10:04 PST | 01 Dec 23 10:04 PST |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-149000 cache add                                              | functional-149000    | jenkins | v1.32.0 | 01 Dec 23 10:04 PST | 01 Dec 23 10:04 PST |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-149000 cache add                                              | functional-149000    | jenkins | v1.32.0 | 01 Dec 23 10:04 PST | 01 Dec 23 10:04 PST |
|         | minikube-local-cache-test:functional-149000                              |                      |         |         |                     |                     |
| cache   | functional-149000 cache delete                                           | functional-149000    | jenkins | v1.32.0 | 01 Dec 23 10:04 PST | 01 Dec 23 10:04 PST |
|         | minikube-local-cache-test:functional-149000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 01 Dec 23 10:04 PST | 01 Dec 23 10:04 PST |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.32.0 | 01 Dec 23 10:04 PST | 01 Dec 23 10:04 PST |
| ssh     | functional-149000 ssh sudo                                               | functional-149000    | jenkins | v1.32.0 | 01 Dec 23 10:04 PST |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-149000                                                        | functional-149000    | jenkins | v1.32.0 | 01 Dec 23 10:04 PST |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-149000 ssh                                                    | functional-149000    | jenkins | v1.32.0 | 01 Dec 23 10:04 PST |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-149000 cache reload                                           | functional-149000    | jenkins | v1.32.0 | 01 Dec 23 10:04 PST | 01 Dec 23 10:04 PST |
| ssh     | functional-149000 ssh                                                    | functional-149000    | jenkins | v1.32.0 | 01 Dec 23 10:04 PST |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 01 Dec 23 10:04 PST | 01 Dec 23 10:04 PST |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 01 Dec 23 10:04 PST | 01 Dec 23 10:04 PST |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-149000 kubectl --                                             | functional-149000    | jenkins | v1.32.0 | 01 Dec 23 10:04 PST |                     |
|         | --context functional-149000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-149000                                                     | functional-149000    | jenkins | v1.32.0 | 01 Dec 23 10:04 PST |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

* 
* ==> Last Start <==
* Log file created at: 2023/12/01 10:04:18
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.21.4 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1201 10:04:18.357847    6108 out.go:296] Setting OutFile to fd 1 ...
I1201 10:04:18.357990    6108 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1201 10:04:18.357992    6108 out.go:309] Setting ErrFile to fd 2...
I1201 10:04:18.357994    6108 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1201 10:04:18.358119    6108 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17703-5375/.minikube/bin
I1201 10:04:18.359173    6108 out.go:303] Setting JSON to false
I1201 10:04:18.375112    6108 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2032,"bootTime":1701451826,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W1201 10:04:18.375197    6108 start.go:136] gopshost.Virtualization returned error: not implemented yet
I1201 10:04:18.386712    6108 out.go:177] * [functional-149000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
I1201 10:04:18.400632    6108 out.go:177]   - MINIKUBE_LOCATION=17703
I1201 10:04:18.394672    6108 notify.go:220] Checking for updates...
I1201 10:04:18.407501    6108 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
I1201 10:04:18.410620    6108 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I1201 10:04:18.417525    6108 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1201 10:04:18.424441    6108 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
I1201 10:04:18.428496    6108 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I1201 10:04:18.431925    6108 config.go:182] Loaded profile config "functional-149000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1201 10:04:18.431972    6108 driver.go:392] Setting default libvirt URI to qemu:///system
I1201 10:04:18.436554    6108 out.go:177] * Using the qemu2 driver based on existing profile
I1201 10:04:18.444576    6108 start.go:298] selected driver: qemu2
I1201 10:04:18.444581    6108 start.go:902] validating driver "qemu2" against &{Name:functional-149000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.4 ClusterName:functional-149000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize
:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
I1201 10:04:18.444676    6108 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1201 10:04:18.447306    6108 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1201 10:04:18.447348    6108 cni.go:84] Creating CNI manager for ""
I1201 10:04:18.447356    6108 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1201 10:04:18.447360    6108 start_flags.go:323] config:
{Name:functional-149000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-149000 Namespace:default AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize
:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
I1201 10:04:18.452499    6108 iso.go:125] acquiring lock: {Name:mk3aa781e939aa763c73ad5fb916ad9c0c6cf746 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1201 10:04:18.459590    6108 out.go:177] * Starting control plane node functional-149000 in cluster functional-149000
I1201 10:04:18.462528    6108 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
I1201 10:04:18.462558    6108 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
I1201 10:04:18.462564    6108 cache.go:56] Caching tarball of preloaded images
I1201 10:04:18.462628    6108 preload.go:174] Found /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I1201 10:04:18.462631    6108 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
I1201 10:04:18.462708    6108 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17703-5375/.minikube/profiles/functional-149000/config.json ...
I1201 10:04:18.463179    6108 start.go:365] acquiring machines lock for functional-149000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1201 10:04:18.463213    6108 start.go:369] acquired machines lock for "functional-149000" in 29.584µs
I1201 10:04:18.463221    6108 start.go:96] Skipping create...Using existing machine configuration
I1201 10:04:18.463226    6108 fix.go:54] fixHost starting: 
I1201 10:04:18.463349    6108 fix.go:102] recreateIfNeeded on functional-149000: state=Stopped err=<nil>
W1201 10:04:18.463356    6108 fix.go:128] unexpected machine state, will restart: <nil>
I1201 10:04:18.469492    6108 out.go:177] * Restarting existing qemu2 VM for "functional-149000" ...
I1201 10:04:18.473619    6108 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/functional-149000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/functional-149000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/functional-149000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:5c:63:bf:3d:03 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/functional-149000/disk.qcow2
I1201 10:04:18.476010    6108 main.go:141] libmachine: STDOUT: 
I1201 10:04:18.476032    6108 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                
I1201 10:04:18.476064    6108 fix.go:56] fixHost completed within 12.837ms
I1201 10:04:18.476067    6108 start.go:83] releasing machines lock for "functional-149000", held for 12.851292ms
W1201 10:04:18.476084    6108 start.go:694] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W1201 10:04:18.476115    6108 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I1201 10:04:18.476120    6108 start.go:709] Will try again in 5 seconds ...
I1201 10:04:23.478287    6108 start.go:365] acquiring machines lock for functional-149000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1201 10:04:23.478616    6108 start.go:369] acquired machines lock for "functional-149000" in 261.416µs
I1201 10:04:23.478725    6108 start.go:96] Skipping create...Using existing machine configuration
I1201 10:04:23.478736    6108 fix.go:54] fixHost starting: 
I1201 10:04:23.479414    6108 fix.go:102] recreateIfNeeded on functional-149000: state=Stopped err=<nil>
W1201 10:04:23.479433    6108 fix.go:128] unexpected machine state, will restart: <nil>
I1201 10:04:23.491031    6108 out.go:177] * Restarting existing qemu2 VM for "functional-149000" ...
I1201 10:04:23.499072    6108 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/functional-149000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/functional-149000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/functional-149000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:5c:63:bf:3d:03 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/functional-149000/disk.qcow2
I1201 10:04:23.508295    6108 main.go:141] libmachine: STDOUT: 
I1201 10:04:23.508350    6108 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                
I1201 10:04:23.508475    6108 fix.go:56] fixHost completed within 29.699208ms
I1201 10:04:23.508488    6108 start.go:83] releasing machines lock for "functional-149000", held for 29.857333ms
W1201 10:04:23.508690    6108 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-149000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I1201 10:04:23.516890    6108 out.go:177] 
W1201 10:04:23.521036    6108 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W1201 10:04:23.521191    6108 out.go:239] * 
W1201 10:04:23.523957    6108 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1201 10:04:23.533868    6108 out.go:177] 

                                                
                                                
* 
***
--- FAIL: TestFunctional/serial/LogsFileCmd (0.09s)
TestFunctional/serial/InvalidService (0.03s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-149000 apply -f testdata/invalidsvc.yaml
functional_test.go:2317: (dbg) Non-zero exit: kubectl --context functional-149000 apply -f testdata/invalidsvc.yaml: exit status 1 (27.064209ms)

                                                
                                                
** stderr ** 
	error: context "functional-149000" does not exist

                                                
                                                
** /stderr **
functional_test.go:2319: kubectl --context functional-149000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.03s)
TestFunctional/parallel/DashboardCmd (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-149000 --alsologtostderr -v=1]
functional_test.go:914: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-149000 --alsologtostderr -v=1] ...
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-149000 --alsologtostderr -v=1] stdout:
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-149000 --alsologtostderr -v=1] stderr:
I1201 10:05:08.723447    6423 out.go:296] Setting OutFile to fd 1 ...
I1201 10:05:08.723825    6423 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1201 10:05:08.723829    6423 out.go:309] Setting ErrFile to fd 2...
I1201 10:05:08.723831    6423 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1201 10:05:08.723949    6423 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17703-5375/.minikube/bin
I1201 10:05:08.724192    6423 mustload.go:65] Loading cluster: functional-149000
I1201 10:05:08.724371    6423 config.go:182] Loaded profile config "functional-149000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1201 10:05:08.727823    6423 out.go:177] * The control plane node must be running for this command
I1201 10:05:08.735717    6423 out.go:177]   To start a cluster, run: "minikube start -p functional-149000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-149000 -n functional-149000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-149000 -n functional-149000: exit status 7 (43.275208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-149000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.21s)
TestFunctional/parallel/StatusCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 status
functional_test.go:850: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-149000 status: exit status 7 (31.736292ms)

                                                
                                                
-- stdout --
	functional-149000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
functional_test.go:852: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-149000 status" : exit status 7
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-149000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (31.908541ms)

                                                
                                                
-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped

                                                
                                                
-- /stdout --
functional_test.go:858: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-149000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 status -o json
functional_test.go:868: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-149000 status -o json: exit status 7 (31.119125ms)

                                                
                                                
-- stdout --
	{"Name":"functional-149000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
functional_test.go:870: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-149000 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-149000 -n functional-149000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-149000 -n functional-149000: exit status 7 (31.634834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-149000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.13s)
TestFunctional/parallel/ServiceCmdConnect (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-149000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1626: (dbg) Non-zero exit: kubectl --context functional-149000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (24.922583ms)

                                                
                                                
** stderr ** 
	error: context "functional-149000" does not exist

                                                
                                                
** /stderr **
functional_test.go:1632: failed to create hello-node deployment with this command "kubectl --context functional-149000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1597: service test failed - dumping debug information
functional_test.go:1598: -----------------------service failure post-mortem--------------------------------
functional_test.go:1601: (dbg) Run:  kubectl --context functional-149000 describe po hello-node-connect
functional_test.go:1601: (dbg) Non-zero exit: kubectl --context functional-149000 describe po hello-node-connect: exit status 1 (24.496084ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-149000

                                                
                                                
** /stderr **
functional_test.go:1603: "kubectl --context functional-149000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1605: hello-node pod describe:
functional_test.go:1607: (dbg) Run:  kubectl --context functional-149000 logs -l app=hello-node-connect
functional_test.go:1607: (dbg) Non-zero exit: kubectl --context functional-149000 logs -l app=hello-node-connect: exit status 1 (24.777334ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-149000

                                                
                                                
** /stderr **
functional_test.go:1609: "kubectl --context functional-149000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1611: hello-node logs:
functional_test.go:1613: (dbg) Run:  kubectl --context functional-149000 describe svc hello-node-connect
functional_test.go:1613: (dbg) Non-zero exit: kubectl --context functional-149000 describe svc hello-node-connect: exit status 1 (24.801667ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-149000

                                                
                                                
** /stderr **
functional_test.go:1615: "kubectl --context functional-149000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1617: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-149000 -n functional-149000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-149000 -n functional-149000: exit status 7 (31.751ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-149000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.13s)
TestFunctional/parallel/PersistentVolumeClaim (0.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-149000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-149000 -n functional-149000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-149000 -n functional-149000: exit status 7 (31.516875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-149000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.03s)
TestFunctional/parallel/SSHCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 ssh "echo hello"
functional_test.go:1724: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-149000 ssh "echo hello": exit status 89 (51.715667ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-149000"

                                                
                                                
-- /stdout --
functional_test.go:1729: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-149000 ssh \"echo hello\"" : exit status 89
functional_test.go:1733: expected minikube ssh command output to be -"hello"- but got *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p functional-149000\"\n"*. args "out/minikube-darwin-arm64 -p functional-149000 ssh \"echo hello\""
functional_test.go:1741: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 ssh "cat /etc/hostname"
functional_test.go:1741: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-149000 ssh "cat /etc/hostname": exit status 89 (54.827792ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-149000"

                                                
                                                
-- /stdout --
functional_test.go:1747: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-149000 ssh \"cat /etc/hostname\"" : exit status 89
functional_test.go:1751: expected minikube ssh command output to be -"functional-149000"- but got *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p functional-149000\"\n"*. args "out/minikube-darwin-arm64 -p functional-149000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-149000 -n functional-149000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-149000 -n functional-149000: exit status 7 (34.475209ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-149000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.14s)
TestFunctional/parallel/CpCmd (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-149000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 89 (58.298708ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-149000"

                                                
                                                
-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-149000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 89
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 ssh -n functional-149000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-149000 ssh -n functional-149000 "sudo cat /home/docker/cp-test.txt": exit status 89 (48.860333ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-149000"

                                                
                                                
-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-149000 ssh -n functional-149000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 89
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control plane node must be running for this command\n  To star",
+ 	"t a cluster, run: \"minikube start -p functional-149000\"\n",
  }, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 cp functional-149000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd1137253342/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-149000 cp functional-149000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd1137253342/001/cp-test.txt: exit status 89 (53.547875ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-149000"

                                                
                                                
-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-149000 cp functional-149000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd1137253342/001/cp-test.txt" : exit status 89
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 ssh -n functional-149000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-149000 ssh -n functional-149000 "sudo cat /home/docker/cp-test.txt": exit status 89 (47.8935ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-149000"

                                                
                                                
-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-149000 ssh -n functional-149000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 89
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd1137253342/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  string(
- 	"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p functional-149000\"\n",
+ 	"",
  )
--- FAIL: TestFunctional/parallel/CpCmd (0.21s)
TestFunctional/parallel/FileSync (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/5825/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 ssh "sudo cat /etc/test/nested/copy/5825/hosts"
functional_test.go:1927: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-149000 ssh "sudo cat /etc/test/nested/copy/5825/hosts": exit status 89 (42.241583ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-149000"

                                                
                                                
-- /stdout --
functional_test.go:1929: out/minikube-darwin-arm64 -p functional-149000 ssh "sudo cat /etc/test/nested/copy/5825/hosts" failed: exit status 89
functional_test.go:1932: file sync test content: * The control plane node must be running for this command
To start a cluster, run: "minikube start -p functional-149000"
functional_test.go:1942: /etc/sync.test content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file sync process",
+ 	"he control plane node must be running for this command\n  To star",
+ 	"t a cluster, run: \"minikube start -p functional-149000\"\n",
  }, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-149000 -n functional-149000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-149000 -n functional-149000: exit status 7 (32.888417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-149000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.08s)
TestFunctional/parallel/CertSync (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/5825.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 ssh "sudo cat /etc/ssl/certs/5825.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-149000 ssh "sudo cat /etc/ssl/certs/5825.pem": exit status 89 (43.724583ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-149000"

                                                
                                                
-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/5825.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-149000 ssh \"sudo cat /etc/ssl/certs/5825.pem\"": exit status 89
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/5825.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control plane node must be running for this command
+ 	  To start a cluster, run: "minikube start -p functional-149000"
  	"""
  )
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/5825.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 ssh "sudo cat /usr/share/ca-certificates/5825.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-149000 ssh "sudo cat /usr/share/ca-certificates/5825.pem": exit status 89 (47.782292ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-149000"

                                                
                                                
-- /stdout --
functional_test.go:1971: failed to check existence of "/usr/share/ca-certificates/5825.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-149000 ssh \"sudo cat /usr/share/ca-certificates/5825.pem\"": exit status 89
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/5825.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control plane node must be running for this command
+ 	  To start a cluster, run: "minikube start -p functional-149000"
  	"""
  )
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-149000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 89 (42.662958ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-149000"

                                                
                                                
-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-149000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 89
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control plane node must be running for this command
+ 	  To start a cluster, run: "minikube start -p functional-149000"
  	"""
  )
functional_test.go:1995: Checking for existence of /etc/ssl/certs/58252.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 ssh "sudo cat /etc/ssl/certs/58252.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-149000 ssh "sudo cat /etc/ssl/certs/58252.pem": exit status 89 (50.887125ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-149000"

                                                
                                                
-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/58252.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-149000 ssh \"sudo cat /etc/ssl/certs/58252.pem\"": exit status 89
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/58252.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control plane node must be running for this command
+ 	  To start a cluster, run: "minikube start -p functional-149000"
  	"""
  )
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/58252.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 ssh "sudo cat /usr/share/ca-certificates/58252.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-149000 ssh "sudo cat /usr/share/ca-certificates/58252.pem": exit status 89 (43.552125ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-149000"

                                                
                                                
-- /stdout --
functional_test.go:1998: failed to check existence of "/usr/share/ca-certificates/58252.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-149000 ssh \"sudo cat /usr/share/ca-certificates/58252.pem\"": exit status 89
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/58252.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control plane node must be running for this command
+ 	  To start a cluster, run: "minikube start -p functional-149000"
  	"""
  )
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-149000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 89 (43.690417ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-149000"

                                                
                                                
-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-149000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 89
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control plane node must be running for this command
+ 	  To start a cluster, run: "minikube start -p functional-149000"
  	"""
  )
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-149000 -n functional-149000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-149000 -n functional-149000: exit status 7 (33.315708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-149000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.31s)
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-149000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:218: (dbg) Non-zero exit: kubectl --context functional-149000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (25.543292ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-149000

                                                
                                                
** /stderr **
functional_test.go:220: failed to 'kubectl get nodes' with args "kubectl --context functional-149000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:226: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-149000

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-149000

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-149000

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-149000

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-149000

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-149000 -n functional-149000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-149000 -n functional-149000: exit status 7 (32.635583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-149000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (0.06s)
TestFunctional/parallel/NonActiveRuntimeDisabled (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-149000 ssh "sudo systemctl is-active crio": exit status 89 (46.128042ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-149000"

                                                
                                                
-- /stdout --
functional_test.go:2026: output of 
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-149000"

                                                
                                                
-- /stdout --: exit status 89
functional_test.go:2029: For runtime "docker": expected "crio" to be inactive but got "* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p functional-149000\"\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.05s)
TestFunctional/parallel/Version/components (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 version -o=json --components
functional_test.go:2266: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-149000 version -o=json --components: exit status 89 (48.040875ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-149000"

                                                
                                                
-- /stdout --
functional_test.go:2268: error version: exit status 89
functional_test.go:2273: expected to see "buildctl" in the minikube version --components but got:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p functional-149000"
functional_test.go:2273: expected to see "commit" in the minikube version --components but got:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p functional-149000"
functional_test.go:2273: expected to see "containerd" in the minikube version --components but got:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p functional-149000"
functional_test.go:2273: expected to see "crictl" in the minikube version --components but got:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p functional-149000"
functional_test.go:2273: expected to see "crio" in the minikube version --components but got:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p functional-149000"
functional_test.go:2273: expected to see "ctr" in the minikube version --components but got:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p functional-149000"
functional_test.go:2273: expected to see "docker" in the minikube version --components but got:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p functional-149000"
functional_test.go:2273: expected to see "minikubeVersion" in the minikube version --components but got:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p functional-149000"
functional_test.go:2273: expected to see "podman" in the minikube version --components but got:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p functional-149000"
functional_test.go:2273: expected to see "crun" in the minikube version --components but got:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p functional-149000"
--- FAIL: TestFunctional/parallel/Version/components (0.05s)
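
As a sketch of the expected passing behaviour (assumed, not observed in this run): with a running node, the components listing should contain the names the test greps for.
  out/minikube-darwin-arm64 -p functional-149000 version -o=json --components
  # Expected to include entries such as buildctl, commit, containerd, crictl,
  # crio, ctr, docker, minikubeVersion, podman and crun.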

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-149000 image ls --format short --alsologtostderr:

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-149000 image ls --format short --alsologtostderr:
I1201 10:05:09.162378    6438 out.go:296] Setting OutFile to fd 1 ...
I1201 10:05:09.162524    6438 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1201 10:05:09.162528    6438 out.go:309] Setting ErrFile to fd 2...
I1201 10:05:09.162530    6438 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1201 10:05:09.162656    6438 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17703-5375/.minikube/bin
I1201 10:05:09.163054    6438 config.go:182] Loaded profile config "functional-149000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1201 10:05:09.163117    6438 config.go:182] Loaded profile config "functional-149000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
functional_test.go:274: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-149000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-149000 image ls --format table --alsologtostderr:
I1201 10:05:09.398146    6450 out.go:296] Setting OutFile to fd 1 ...
I1201 10:05:09.398295    6450 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1201 10:05:09.398298    6450 out.go:309] Setting ErrFile to fd 2...
I1201 10:05:09.398301    6450 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1201 10:05:09.398437    6450 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17703-5375/.minikube/bin
I1201 10:05:09.398826    6450 config.go:182] Loaded profile config "functional-149000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1201 10:05:09.398890    6450 config.go:182] Loaded profile config "functional-149000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
functional_test.go:274: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-149000 image ls --format json --alsologtostderr:
[]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-149000 image ls --format json --alsologtostderr:
I1201 10:05:09.360074    6448 out.go:296] Setting OutFile to fd 1 ...
I1201 10:05:09.360239    6448 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1201 10:05:09.360242    6448 out.go:309] Setting ErrFile to fd 2...
I1201 10:05:09.360245    6448 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1201 10:05:09.360370    6448 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17703-5375/.minikube/bin
I1201 10:05:09.360818    6448 config.go:182] Loaded profile config "functional-149000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1201 10:05:09.360884    6448 config.go:182] Loaded profile config "functional-149000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
functional_test.go:274: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-149000 image ls --format yaml --alsologtostderr:
[]

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-149000 image ls --format yaml --alsologtostderr:
I1201 10:05:09.201150    6440 out.go:296] Setting OutFile to fd 1 ...
I1201 10:05:09.201297    6440 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1201 10:05:09.201300    6440 out.go:309] Setting ErrFile to fd 2...
I1201 10:05:09.201303    6440 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1201 10:05:09.201429    6440 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17703-5375/.minikube/bin
I1201 10:05:09.201831    6440 config.go:182] Loaded profile config "functional-149000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1201 10:05:09.201897    6440 config.go:182] Loaded profile config "functional-149000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
functional_test.go:274: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)
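
The four ImageList failures above share one root cause: the image list is empty because the node never came up. A sketch of the commands involved (profile name from the log; registry.k8s.io/pause is the image the tests grep for):
  out/minikube-darwin-arm64 -p functional-149000 image ls --format short
  out/minikube-darwin-arm64 -p functional-149000 image ls --format table
  out/minikube-darwin-arm64 -p functional-149000 image ls --format json
  out/minikube-darwin-arm64 -p functional-149000 image ls --format yaml
  # On a running node every format should list registry.k8s.io/pause.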

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-149000 ssh pgrep buildkitd: exit status 89 (44.843125ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-149000"

                                                
                                                
-- /stdout --
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 image build -t localhost/my-image:functional-149000 testdata/build --alsologtostderr
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-149000 image build -t localhost/my-image:functional-149000 testdata/build --alsologtostderr:
I1201 10:05:09.284010    6444 out.go:296] Setting OutFile to fd 1 ...
I1201 10:05:09.284379    6444 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1201 10:05:09.284382    6444 out.go:309] Setting ErrFile to fd 2...
I1201 10:05:09.284385    6444 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1201 10:05:09.284512    6444 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17703-5375/.minikube/bin
I1201 10:05:09.284871    6444 config.go:182] Loaded profile config "functional-149000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1201 10:05:09.285275    6444 config.go:182] Loaded profile config "functional-149000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1201 10:05:09.285511    6444 build_images.go:123] succeeded building to: 
I1201 10:05:09.285514    6444 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 image ls
functional_test.go:442: expected "localhost/my-image:functional-149000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.12s)
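
Sketch of the build flow this test exercises (commands taken from the log; expected results assumed, not observed here):
  out/minikube-darwin-arm64 -p functional-149000 ssh pgrep buildkitd
  out/minikube-darwin-arm64 -p functional-149000 image build -t localhost/my-image:functional-149000 testdata/build
  out/minikube-darwin-arm64 -p functional-149000 image ls
  # A passing run lists localhost/my-image:functional-149000 after the build.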

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-149000 docker-env) && out/minikube-darwin-arm64 status -p functional-149000"
functional_test.go:495: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-149000 docker-env) && out/minikube-darwin-arm64 status -p functional-149000": exit status 1 (47.971375ms)
functional_test.go:501: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.05s)
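
Sketch of the docker-env round trip the test performs (same command as in the log): export the node's daemon environment into the current shell, then ask for status from that shell.
  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-149000 docker-env) && out/minikube-darwin-arm64 status -p functional-149000"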

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-149000 update-context --alsologtostderr -v=2: exit status 89 (44.626334ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-149000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 10:05:09.018932    6432 out.go:296] Setting OutFile to fd 1 ...
	I1201 10:05:09.019408    6432 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:05:09.019411    6432 out.go:309] Setting ErrFile to fd 2...
	I1201 10:05:09.019414    6432 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:05:09.019530    6432 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17703-5375/.minikube/bin
	I1201 10:05:09.019727    6432 mustload.go:65] Loading cluster: functional-149000
	I1201 10:05:09.019925    6432 config.go:182] Loaded profile config "functional-149000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1201 10:05:09.023830    6432 out.go:177] * The control plane node must be running for this command
	I1201 10:05:09.029784    6432 out.go:177]   To start a cluster, run: "minikube start -p functional-149000"

                                                
                                                
** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-149000 update-context --alsologtostderr -v=2": exit status 89
functional_test.go:2122: update-context: got="* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p functional-149000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-149000 update-context --alsologtostderr -v=2: exit status 89 (48.745375ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-149000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 10:05:09.112985    6436 out.go:296] Setting OutFile to fd 1 ...
	I1201 10:05:09.113114    6436 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:05:09.113118    6436 out.go:309] Setting ErrFile to fd 2...
	I1201 10:05:09.113120    6436 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:05:09.113240    6436 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17703-5375/.minikube/bin
	I1201 10:05:09.113467    6436 mustload.go:65] Loading cluster: functional-149000
	I1201 10:05:09.113676    6436 config.go:182] Loaded profile config "functional-149000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1201 10:05:09.118849    6436 out.go:177] * The control plane node must be running for this command
	I1201 10:05:09.125751    6436 out.go:177]   To start a cluster, run: "minikube start -p functional-149000"

                                                
                                                
** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-149000 update-context --alsologtostderr -v=2": exit status 89
functional_test.go:2122: update-context: got="* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p functional-149000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-149000 update-context --alsologtostderr -v=2: exit status 89 (48.535875ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-149000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 10:05:09.063885    6434 out.go:296] Setting OutFile to fd 1 ...
	I1201 10:05:09.064053    6434 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:05:09.064057    6434 out.go:309] Setting ErrFile to fd 2...
	I1201 10:05:09.064060    6434 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:05:09.064188    6434 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17703-5375/.minikube/bin
	I1201 10:05:09.064392    6434 mustload.go:65] Loading cluster: functional-149000
	I1201 10:05:09.064590    6434 config.go:182] Loaded profile config "functional-149000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1201 10:05:09.068799    6434 out.go:177] * The control plane node must be running for this command
	I1201 10:05:09.076805    6434 out.go:177]   To start a cluster, run: "minikube start -p functional-149000"

                                                
                                                
** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-149000 update-context --alsologtostderr -v=2": exit status 89
functional_test.go:2122: update-context: got="* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p functional-149000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)
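
All three UpdateContextCmd failures are the same symptom: update-context needs a running control plane. Sketch of the passing case (assumed, not observed in this run):
  out/minikube-darwin-arm64 -p functional-149000 update-context --alsologtostderr -v=2
  # Expected output on a healthy profile contains either "No changes" or
  # "... context has been updated", which is what the three subtests grep for.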

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-149000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1436: (dbg) Non-zero exit: kubectl --context functional-149000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (25.591584ms)

                                                
                                                
** stderr ** 
	error: context "functional-149000" does not exist

                                                
                                                
** /stderr **
functional_test.go:1442: failed to create hello-node deployment with this command "kubectl --context functional-149000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 service list
functional_test.go:1458: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-149000 service list: exit status 89 (43.71075ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-149000"

                                                
                                                
-- /stdout --
functional_test.go:1460: failed to do service list. args "out/minikube-darwin-arm64 -p functional-149000 service list" : exit status 89
functional_test.go:1463: expected 'service list' to contain *hello-node* but got -"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p functional-149000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 service list -o json
functional_test.go:1488: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-149000 service list -o json: exit status 89 (43.781666ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-149000"

                                                
                                                
-- /stdout --
functional_test.go:1490: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-149000 service list -o json": exit status 89
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 service --namespace=default --https --url hello-node
functional_test.go:1508: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-149000 service --namespace=default --https --url hello-node: exit status 89 (47.79175ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-149000"

                                                
                                                
-- /stdout --
functional_test.go:1510: failed to get service url. args "out/minikube-darwin-arm64 -p functional-149000 service --namespace=default --https --url hello-node" : exit status 89
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.05s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 service hello-node --url --format={{.IP}}
functional_test.go:1539: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-149000 service hello-node --url --format={{.IP}}: exit status 89 (48.865666ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-149000"

                                                
                                                
-- /stdout --
functional_test.go:1541: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-149000 service hello-node --url --format={{.IP}}": exit status 89
functional_test.go:1547: "* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p functional-149000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.05s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 service hello-node --url
functional_test.go:1558: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-149000 service hello-node --url: exit status 89 (48.787125ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-149000"

                                                
                                                
-- /stdout --
functional_test.go:1560: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-149000 service hello-node --url": exit status 89
functional_test.go:1564: found endpoint for hello-node: * The control plane node must be running for this command
To start a cluster, run: "minikube start -p functional-149000"
functional_test.go:1568: failed to parse "* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p functional-149000\"": parse "* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p functional-149000\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.05s)
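
Sketch of the service flow the ServiceCmd subtests build on (commands taken from the log; a running cluster and kube context are assumed):
  kubectl --context functional-149000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
  out/minikube-darwin-arm64 -p functional-149000 service list
  out/minikube-darwin-arm64 -p functional-149000 service hello-node --url
  # With the deployment running, "service list" shows hello-node and the last
  # command prints a reachable URL instead of the control-plane warning.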

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-149000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-149000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 89. stderr: I1201 10:04:25.398356    6215 out.go:296] Setting OutFile to fd 1 ...
I1201 10:04:25.398534    6215 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1201 10:04:25.398538    6215 out.go:309] Setting ErrFile to fd 2...
I1201 10:04:25.398541    6215 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1201 10:04:25.398683    6215 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17703-5375/.minikube/bin
I1201 10:04:25.398914    6215 mustload.go:65] Loading cluster: functional-149000
I1201 10:04:25.399131    6215 config.go:182] Loaded profile config "functional-149000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1201 10:04:25.402431    6215 out.go:177] * The control plane node must be running for this command
I1201 10:04:25.414407    6215 out.go:177]   To start a cluster, run: "minikube start -p functional-149000"

                                                
                                                
stdout: * The control plane node must be running for this command
To start a cluster, run: "minikube start -p functional-149000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-149000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 6216: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-149000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-149000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-149000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-149000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-149000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-149000": client config: context "functional-149000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (91.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-149000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-149000 get svc nginx-svc: exit status 1 (69.233792ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-149000

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-149000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (91.63s)
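
Sketch of the direct-access path this test checks (assumed behaviour; SVC_IP is a placeholder for the LoadBalancer IP reported by kubectl, not a value from this run):
  out/minikube-darwin-arm64 -p functional-149000 tunnel &
  kubectl --context functional-149000 get svc nginx-svc
  curl "http://$SVC_IP"
  # A passing run returns a page containing "Welcome to nginx!".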

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 image load --daemon gcr.io/google-containers/addon-resizer:functional-149000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-arm64 -p functional-149000 image load --daemon gcr.io/google-containers/addon-resizer:functional-149000 --alsologtostderr: (1.324251792s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-149000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.36s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 image load --daemon gcr.io/google-containers/addon-resizer:functional-149000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-arm64 -p functional-149000 image load --daemon gcr.io/google-containers/addon-resizer:functional-149000 --alsologtostderr: (1.334623208s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-149000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.37s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.328064667s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-149000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 image load --daemon gcr.io/google-containers/addon-resizer:functional-149000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-arm64 -p functional-149000 image load --daemon gcr.io/google-containers/addon-resizer:functional-149000 --alsologtostderr: (1.204747625s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-149000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.62s)
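
Sketch of the tag-and-load flow from this test (commands taken from the log): retag the image on the host daemon, load it into minikube, then confirm it is listed.
  docker pull gcr.io/google-containers/addon-resizer:1.8.9
  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-149000
  out/minikube-darwin-arm64 -p functional-149000 image load --daemon gcr.io/google-containers/addon-resizer:functional-149000
  out/minikube-darwin-arm64 -p functional-149000 image ls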

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 image save gcr.io/google-containers/addon-resizer:functional-149000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:385: expected "/Users/jenkins/workspace/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-149000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.08s)
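
Sketch of the save/load round trip behind the ImageSaveToFile and ImageLoadFromFile failures (paths and image name taken from the log):
  out/minikube-darwin-arm64 -p functional-149000 image save gcr.io/google-containers/addon-resizer:functional-149000 /Users/jenkins/workspace/addon-resizer-save.tar
  out/minikube-darwin-arm64 -p functional-149000 image load /Users/jenkins/workspace/addon-resizer-save.tar
  out/minikube-darwin-arm64 -p functional-149000 image ls
  # Because the save step produced no tar file in this run, the load and list
  # steps could not succeed either.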

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.033722875s)

                                                
                                                
-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached

                                                
                                                
-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration

                                                
                                                
resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

                                                
                                                
resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

                                                
                                                
resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

                                                
                                                
resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

                                                
                                                
resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

                                                
                                                
resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800

                                                
                                                
resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000

                                                
                                                
resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1

                                                
                                                
DNS configuration (for scoped queries)

                                                
                                                
resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 14 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)
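
Sketch of the DNS check (command taken from the log; the expectation that a running tunnel makes the cluster.local resolver at 10.96.0.10 answer is an assumption about the passing case, not something observed here):
  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
  # A passing run, with "minikube tunnel" active, includes "ANSWER: 1" in the
  # dig output; here the query times out because no cluster or tunnel is up.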

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (38.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (38.67s)

                                                
                                    
TestImageBuild/serial/Setup (9.87s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-002000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-002000 --driver=qemu2 : exit status 80 (9.796047208s)

                                                
                                                
-- stdout --
	* [image-002000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17703
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node image-002000 in cluster image-002000
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-002000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-002000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-002000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-002000 -n image-002000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-002000 -n image-002000: exit status 7 (72.264667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-002000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.87s)
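
A quick, hedged check for the failure mode seen throughout this run (the socket path comes from the error message above; how socket_vmnet is supervised depends on how it was installed):
  ls -l /var/run/socket_vmnet
  # If the socket is missing or refuses connections, restart socket_vmnet per
  # its install method, then retry:
  out/minikube-darwin-arm64 start -p image-002000 --driver=qemu2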

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (17.01s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-arm64 start -p ingress-addon-legacy-831000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=qemu2 
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ingress-addon-legacy-831000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (17.007106791s)

                                                
                                                
-- stdout --
	* [ingress-addon-legacy-831000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17703
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node ingress-addon-legacy-831000 in cluster ingress-addon-legacy-831000
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating qemu2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ingress-addon-legacy-831000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 10:07:11.410249    6527 out.go:296] Setting OutFile to fd 1 ...
	I1201 10:07:11.410395    6527 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:07:11.410398    6527 out.go:309] Setting ErrFile to fd 2...
	I1201 10:07:11.410401    6527 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:07:11.410528    6527 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17703-5375/.minikube/bin
	I1201 10:07:11.411573    6527 out.go:303] Setting JSON to false
	I1201 10:07:11.427575    6527 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2205,"bootTime":1701451826,"procs":448,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1201 10:07:11.427669    6527 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1201 10:07:11.433888    6527 out.go:177] * [ingress-addon-legacy-831000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1201 10:07:11.444245    6527 out.go:177]   - MINIKUBE_LOCATION=17703
	I1201 10:07:11.441957    6527 notify.go:220] Checking for updates...
	I1201 10:07:11.450847    6527 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	I1201 10:07:11.457895    6527 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1201 10:07:11.465692    6527 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 10:07:11.473868    6527 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	I1201 10:07:11.480810    6527 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 10:07:11.484993    6527 driver.go:392] Setting default libvirt URI to qemu:///system
	I1201 10:07:11.493855    6527 out.go:177] * Using the qemu2 driver based on user configuration
	I1201 10:07:11.496831    6527 start.go:298] selected driver: qemu2
	I1201 10:07:11.496836    6527 start.go:902] validating driver "qemu2" against <nil>
	I1201 10:07:11.496841    6527 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 10:07:11.499394    6527 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1201 10:07:11.503811    6527 out.go:177] * Automatically selected the socket_vmnet network
	I1201 10:07:11.506995    6527 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 10:07:11.507054    6527 cni.go:84] Creating CNI manager for ""
	I1201 10:07:11.507063    6527 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1201 10:07:11.507068    6527 start_flags.go:323] config:
	{Name:ingress-addon-legacy-831000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-831000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loc
al ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP
: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1201 10:07:11.512100    6527 iso.go:125] acquiring lock: {Name:mk3aa781e939aa763c73ad5fb916ad9c0c6cf746 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 10:07:11.518893    6527 out.go:177] * Starting control plane node ingress-addon-legacy-831000 in cluster ingress-addon-legacy-831000
	I1201 10:07:11.521909    6527 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1201 10:07:11.576522    6527 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I1201 10:07:11.576551    6527 cache.go:56] Caching tarball of preloaded images
	I1201 10:07:11.576787    6527 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1201 10:07:11.581928    6527 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1201 10:07:11.587892    6527 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I1201 10:07:11.671118    6527 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4?checksum=md5:c8c260b886393123ce9d312d8ac2379e -> /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I1201 10:07:17.606228    6527 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I1201 10:07:17.606389    6527 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I1201 10:07:18.354113    6527 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I1201 10:07:18.354303    6527 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17703-5375/.minikube/profiles/ingress-addon-legacy-831000/config.json ...
	I1201 10:07:18.354319    6527 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17703-5375/.minikube/profiles/ingress-addon-legacy-831000/config.json: {Name:mk4b52925555fbebc6ea0ee300ac294695c24935 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 10:07:18.354536    6527 start.go:365] acquiring machines lock for ingress-addon-legacy-831000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:07:18.354565    6527 start.go:369] acquired machines lock for "ingress-addon-legacy-831000" in 22.5µs
	I1201 10:07:18.354575    6527 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-831000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 K
ubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-831000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:doc
ker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1201 10:07:18.354615    6527 start.go:125] createHost starting for "" (driver="qemu2")
	I1201 10:07:18.373674    6527 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1201 10:07:18.389471    6527 start.go:159] libmachine.API.Create for "ingress-addon-legacy-831000" (driver="qemu2")
	I1201 10:07:18.389493    6527 client.go:168] LocalClient.Create starting
	I1201 10:07:18.389571    6527 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/ca.pem
	I1201 10:07:18.389598    6527 main.go:141] libmachine: Decoding PEM data...
	I1201 10:07:18.389607    6527 main.go:141] libmachine: Parsing certificate...
	I1201 10:07:18.389642    6527 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/cert.pem
	I1201 10:07:18.389667    6527 main.go:141] libmachine: Decoding PEM data...
	I1201 10:07:18.389673    6527 main.go:141] libmachine: Parsing certificate...
	I1201 10:07:18.390014    6527 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17703-5375/.minikube/cache/iso/arm64/minikube-v1.32.1-1701387192-17703-arm64.iso...
	I1201 10:07:18.616825    6527 main.go:141] libmachine: Creating SSH key...
	I1201 10:07:18.916626    6527 main.go:141] libmachine: Creating Disk image...
	I1201 10:07:18.916637    6527 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1201 10:07:18.916867    6527 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/ingress-addon-legacy-831000/disk.qcow2.raw /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/ingress-addon-legacy-831000/disk.qcow2
	I1201 10:07:18.929869    6527 main.go:141] libmachine: STDOUT: 
	I1201 10:07:18.929892    6527 main.go:141] libmachine: STDERR: 
	I1201 10:07:18.929949    6527 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/ingress-addon-legacy-831000/disk.qcow2 +20000M
	I1201 10:07:18.940675    6527 main.go:141] libmachine: STDOUT: Image resized.
	
	I1201 10:07:18.940701    6527 main.go:141] libmachine: STDERR: 
	I1201 10:07:18.940720    6527 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/ingress-addon-legacy-831000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/ingress-addon-legacy-831000/disk.qcow2
	I1201 10:07:18.940724    6527 main.go:141] libmachine: Starting QEMU VM...
	I1201 10:07:18.940763    6527 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4096 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/ingress-addon-legacy-831000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/ingress-addon-legacy-831000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/ingress-addon-legacy-831000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:97:a0:5c:fd:48 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/ingress-addon-legacy-831000/disk.qcow2
	I1201 10:07:18.942512    6527 main.go:141] libmachine: STDOUT: 
	I1201 10:07:18.942527    6527 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:07:18.942546    6527 client.go:171] LocalClient.Create took 553.059708ms
	I1201 10:07:20.944673    6527 start.go:128] duration metric: createHost completed in 2.590101791s
	I1201 10:07:20.944737    6527 start.go:83] releasing machines lock for "ingress-addon-legacy-831000", held for 2.590225375s
	W1201 10:07:20.944812    6527 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:07:20.961170    6527 out.go:177] * Deleting "ingress-addon-legacy-831000" in qemu2 ...
	W1201 10:07:20.998623    6527 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:07:20.998665    6527 start.go:709] Will try again in 5 seconds ...
	I1201 10:07:26.000796    6527 start.go:365] acquiring machines lock for ingress-addon-legacy-831000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:07:26.001269    6527 start.go:369] acquired machines lock for "ingress-addon-legacy-831000" in 362.209µs
	I1201 10:07:26.001360    6527 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-831000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 K
ubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-831000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:doc
ker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1201 10:07:26.001613    6527 start.go:125] createHost starting for "" (driver="qemu2")
	I1201 10:07:26.021356    6527 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1201 10:07:26.070045    6527 start.go:159] libmachine.API.Create for "ingress-addon-legacy-831000" (driver="qemu2")
	I1201 10:07:26.070091    6527 client.go:168] LocalClient.Create starting
	I1201 10:07:26.070221    6527 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/ca.pem
	I1201 10:07:26.070282    6527 main.go:141] libmachine: Decoding PEM data...
	I1201 10:07:26.070304    6527 main.go:141] libmachine: Parsing certificate...
	I1201 10:07:26.070376    6527 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/cert.pem
	I1201 10:07:26.070425    6527 main.go:141] libmachine: Decoding PEM data...
	I1201 10:07:26.070441    6527 main.go:141] libmachine: Parsing certificate...
	I1201 10:07:26.070945    6527 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17703-5375/.minikube/cache/iso/arm64/minikube-v1.32.1-1701387192-17703-arm64.iso...
	I1201 10:07:26.215157    6527 main.go:141] libmachine: Creating SSH key...
	I1201 10:07:26.285598    6527 main.go:141] libmachine: Creating Disk image...
	I1201 10:07:26.285603    6527 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1201 10:07:26.285772    6527 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/ingress-addon-legacy-831000/disk.qcow2.raw /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/ingress-addon-legacy-831000/disk.qcow2
	I1201 10:07:26.297897    6527 main.go:141] libmachine: STDOUT: 
	I1201 10:07:26.297919    6527 main.go:141] libmachine: STDERR: 
	I1201 10:07:26.297984    6527 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/ingress-addon-legacy-831000/disk.qcow2 +20000M
	I1201 10:07:26.308533    6527 main.go:141] libmachine: STDOUT: Image resized.
	
	I1201 10:07:26.308551    6527 main.go:141] libmachine: STDERR: 
	I1201 10:07:26.308567    6527 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/ingress-addon-legacy-831000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/ingress-addon-legacy-831000/disk.qcow2
	I1201 10:07:26.308584    6527 main.go:141] libmachine: Starting QEMU VM...
	I1201 10:07:26.308633    6527 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4096 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/ingress-addon-legacy-831000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/ingress-addon-legacy-831000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/ingress-addon-legacy-831000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:6c:d6:76:fa:5c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/ingress-addon-legacy-831000/disk.qcow2
	I1201 10:07:26.310448    6527 main.go:141] libmachine: STDOUT: 
	I1201 10:07:26.310466    6527 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:07:26.310478    6527 client.go:171] LocalClient.Create took 240.387792ms
	I1201 10:07:28.312696    6527 start.go:128] duration metric: createHost completed in 2.311099s
	I1201 10:07:28.312799    6527 start.go:83] releasing machines lock for "ingress-addon-legacy-831000", held for 2.311557083s
	W1201 10:07:28.313177    6527 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ingress-addon-legacy-831000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ingress-addon-legacy-831000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:07:28.330831    6527 out.go:177] 
	W1201 10:07:28.338839    6527 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1201 10:07:28.338899    6527 out.go:239] * 
	* 
	W1201 10:07:28.341816    6527 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 10:07:28.348685    6527 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-arm64 start -p ingress-addon-legacy-831000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (17.01s)
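Every qemu2 start in this report fails the same way: libmachine prepares the disk image, then /opt/socket_vmnet/bin/socket_vmnet_client exits because nothing is accepting connections on /var/run/socket_vmnet. The Go sketch below is illustrative only (it is not part of the minikube test suite); it checks just that one precondition, using the socket path shown in the SocketVMnetPath field of the config dump above.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Socket path copied from the SocketVMnetPath field in the log above.
	const sock = "/var/run/socket_vmnet"

	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// With no socket_vmnet daemon listening, this is the "Connection refused"
		// that surfaces on libmachine's STDERR and aborts host creation.
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

On a host where the socket_vmnet daemon is running the dial succeeds; on this agent it would be expected to report the same "Connection refused" captured above, which is consistent with every subsequent qemu2 start in this report exiting with GUEST_PROVISION.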

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (0.13s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-831000 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ingress-addon-legacy-831000 addons enable ingress --alsologtostderr -v=5: exit status 10 (89.248625ms)

                                                
                                                
-- stdout --
	* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* Verifying ingress addon...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 10:07:28.441480    6543 out.go:296] Setting OutFile to fd 1 ...
	I1201 10:07:28.442530    6543 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:07:28.442534    6543 out.go:309] Setting ErrFile to fd 2...
	I1201 10:07:28.442537    6543 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:07:28.442677    6543 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17703-5375/.minikube/bin
	I1201 10:07:28.442963    6543 mustload.go:65] Loading cluster: ingress-addon-legacy-831000
	I1201 10:07:28.443206    6543 config.go:182] Loaded profile config "ingress-addon-legacy-831000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1201 10:07:28.443237    6543 addons.go:594] checking whether the cluster is paused
	I1201 10:07:28.443302    6543 config.go:182] Loaded profile config "ingress-addon-legacy-831000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1201 10:07:28.443311    6543 host.go:66] Checking if "ingress-addon-legacy-831000" exists ...
	I1201 10:07:28.447683    6543 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I1201 10:07:28.450784    6543 config.go:182] Loaded profile config "ingress-addon-legacy-831000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1201 10:07:28.450789    6543 addons.go:69] Setting ingress=true in profile "ingress-addon-legacy-831000"
	I1201 10:07:28.450797    6543 addons.go:231] Setting addon ingress=true in "ingress-addon-legacy-831000"
	I1201 10:07:28.450809    6543 host.go:66] Checking if "ingress-addon-legacy-831000" exists ...
	W1201 10:07:28.451026    6543 host.go:58] "ingress-addon-legacy-831000" host status: Stopped
	W1201 10:07:28.451031    6543 addons.go:277] "ingress-addon-legacy-831000" is not running, setting ingress=true and skipping enablement (err=<nil>)
	I1201 10:07:28.451037    6543 addons.go:467] Verifying addon ingress=true in "ingress-addon-legacy-831000"
	I1201 10:07:28.454616    6543 out.go:177] * Verifying ingress addon...
	I1201 10:07:28.458709    6543 loader.go:141] Config not found: /Users/jenkins/minikube-integration/17703-5375/kubeconfig
	I1201 10:07:28.462709    6543 out.go:177] 
	W1201 10:07:28.466658    6543 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-831000" does not exist: client config: context "ingress-addon-legacy-831000" does not exist]
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-831000" does not exist: client config: context "ingress-addon-legacy-831000" does not exist]
	W1201 10:07:28.466665    6543 out.go:239] * 
	* 
	W1201 10:07:28.469061    6543 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 10:07:28.472733    6543 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ingress-addon-legacy-831000 -n ingress-addon-legacy-831000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ingress-addon-legacy-831000 -n ingress-addon-legacy-831000: exit status 7 (36.555958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-831000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (0.13s)
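The addon enablement fails before it touches a cluster: the host is Stopped, no kubeconfig was ever written, so resolving the "ingress-addon-legacy-831000" context cannot succeed. The sketch below reproduces that resolution step with client-go's clientcmd package; it is an illustration only (it assumes a k8s.io/client-go dependency) and reuses the kubeconfig path and context name from the log.

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path and context name copied from the log above; on the failing
	// host the file was never created because the VM never started.
	rules := &clientcmd.ClientConfigLoadingRules{
		Precedence: []string{"/Users/jenkins/minikube-integration/17703-5375/kubeconfig"},
	}
	overrides := &clientcmd.ConfigOverrides{CurrentContext: "ingress-addon-legacy-831000"}

	_, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, overrides).ClientConfig()
	if err != nil {
		// With the kubeconfig missing, the overridden context cannot be resolved,
		// which is what surfaces above as the MK_ADDON_ENABLE error.
		fmt.Println("client config:", err)
	}
}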

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (0.03s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:200: failed to get Kubernetes client: <nil>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ingress-addon-legacy-831000 -n ingress-addon-legacy-831000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ingress-addon-legacy-831000 -n ingress-addon-legacy-831000: exit status 7 (31.887792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-831000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (0.03s)

                                                
                                    
TestJSONOutput/start/Command (9.92s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-730000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-730000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.91817425s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"c5e9562c-5bfa-4714-b4d0-9851cdf44339","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-730000] minikube v1.32.0 on Darwin 14.1.2 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6cc8c2d5-c1b7-4110-a0ff-70d79a6336c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17703"}}
	{"specversion":"1.0","id":"447c66d8-2d93-4fb7-ac9f-840cd3305681","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig"}}
	{"specversion":"1.0","id":"aa8f0cb9-ebcb-4363-9678-44cf5842cd68","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"80215b6b-bf03-465e-8157-a6ba29ca92ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0e8ab9bb-7dbd-4e04-b62b-db0111e212ec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube"}}
	{"specversion":"1.0","id":"eed2a530-177d-4656-a25c-8db0ce25f51c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f32be8f7-efa2-4412-9006-6c82500fc036","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"c7bea1fb-b59f-4c49-9bf7-dc27e5d6ee00","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"746caf03-c5ae-4f4c-82d2-e2fd0b01776d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node json-output-730000 in cluster json-output-730000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"876d7af5-0dae-48f4-817d-a71c3cf05457","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"fd2f266d-efe6-4c2d-8371-61240bc6c850","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-730000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"bb1cc21f-3811-4518-b270-aad5692bc68f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"c1d071cf-ad32-43a7-b573-aceb9923d026","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"9aa4d1e0-4a95-42fc-b3ad-3ccaa103e30f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-730000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"b5066f40-245c-4afa-832b-90c1589d6dc5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"90a4d937-0832-418a-b952-9ddfd790be46","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-730000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.92s)
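The secondary errors ("unable to marshal output: OUTPUT:" and "invalid character 'O' looking for beginning of value") follow directly from the stdout above: the raw OUTPUT:/ERROR: lines emitted during the socket_vmnet failure are not JSON, so decoding them as CloudEvents fails on the first character. A standard-library-only sketch of that decode step, with lines modeled on (not copied verbatim from) the captured output:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// One abbreviated CloudEvents line and one raw line, modeled on the stdout above.
	lines := []string{
		`{"specversion":"1.0","type":"io.k8s.sigs.minikube.step","data":{"message":"Creating qemu2 VM ..."}}`,
		`OUTPUT: `,
	}

	for _, l := range lines {
		var ev map[string]interface{}
		if err := json.Unmarshal([]byte(l), &ev); err != nil {
			// The raw line fails with: invalid character 'O' looking for beginning
			// of value -- the same error reported at json_output_test.go:70 above.
			fmt.Println("converting to cloud events:", err)
			continue
		}
		fmt.Println("parsed event:", ev["type"])
	}
}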

                                                
                                    
TestJSONOutput/pause/Command (0.09s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-730000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-730000 --output=json --user=testUser: exit status 89 (86.075792ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"cd3f6823-1e9a-403e-b433-784dab949c53","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control plane node must be running for this command"}}
	{"specversion":"1.0","id":"23d09f2d-2ae0-4acb-bb9b-04cacd441f5b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-730000\""}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-730000 --output=json --user=testUser": exit status 89
--- FAIL: TestJSONOutput/pause/Command (0.09s)

                                                
                                    
TestJSONOutput/unpause/Command (0.05s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-730000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-730000 --output=json --user=testUser: exit status 89 (52.0375ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p json-output-730000"

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-730000 --output=json --user=testUser": exit status 89
json_output_test.go:213: unable to marshal output: * The control plane node must be running for this command
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

                                                
                                    
TestMinikubeProfile (10.31s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-406000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-406000 --driver=qemu2 : exit status 80 (9.861142958s)

                                                
                                                
-- stdout --
	* [first-406000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17703
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node first-406000 in cluster first-406000
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-406000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-406000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-406000 --driver=qemu2 ": exit status 80
panic.go:523: *** TestMinikubeProfile FAILED at 2023-12-01 10:07:49.14287 -0800 PST m=+283.178931668
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-407000 -n second-407000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-407000 -n second-407000: exit status 85 (85.908292ms)

                                                
                                                
-- stdout --
	* Profile "second-407000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-407000"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-407000" host is not running, skipping log retrieval (state="* Profile \"second-407000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-407000\"")
helpers_test.go:175: Cleaning up "second-407000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-407000
panic.go:523: *** TestMinikubeProfile FAILED at 2023-12-01 10:07:49.459637 -0800 PST m=+283.495706210
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-406000 -n first-406000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-406000 -n first-406000: exit status 7 (31.848292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-406000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-406000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-406000
--- FAIL: TestMinikubeProfile (10.31s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (9.96s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-580000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-580000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.883423375s)

                                                
                                                
-- stdout --
	* [mount-start-1-580000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17703
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-580000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-580000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-580000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-580000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-580000 -n mount-start-1-580000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-580000 -n mount-start-1-580000: exit status 7 (73.206958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-580000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (9.96s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (10.05s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-486000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:86: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-486000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.972097042s)

                                                
                                                
-- stdout --
	* [multinode-486000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17703
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-486000 in cluster multinode-486000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-486000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 10:07:59.912840    6679 out.go:296] Setting OutFile to fd 1 ...
	I1201 10:07:59.912987    6679 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:07:59.912990    6679 out.go:309] Setting ErrFile to fd 2...
	I1201 10:07:59.912993    6679 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:07:59.913124    6679 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17703-5375/.minikube/bin
	I1201 10:07:59.914267    6679 out.go:303] Setting JSON to false
	I1201 10:07:59.930407    6679 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2253,"bootTime":1701451826,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1201 10:07:59.930496    6679 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1201 10:07:59.937496    6679 out.go:177] * [multinode-486000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1201 10:07:59.949441    6679 out.go:177]   - MINIKUBE_LOCATION=17703
	I1201 10:07:59.945533    6679 notify.go:220] Checking for updates...
	I1201 10:07:59.956436    6679 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	I1201 10:07:59.963484    6679 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1201 10:07:59.967455    6679 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 10:07:59.970466    6679 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	I1201 10:07:59.978421    6679 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 10:07:59.982623    6679 driver.go:392] Setting default libvirt URI to qemu:///system
	I1201 10:07:59.987443    6679 out.go:177] * Using the qemu2 driver based on user configuration
	I1201 10:07:59.995302    6679 start.go:298] selected driver: qemu2
	I1201 10:07:59.995310    6679 start.go:902] validating driver "qemu2" against <nil>
	I1201 10:07:59.995317    6679 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 10:07:59.998053    6679 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1201 10:08:00.001427    6679 out.go:177] * Automatically selected the socket_vmnet network
	I1201 10:08:00.005493    6679 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 10:08:00.005528    6679 cni.go:84] Creating CNI manager for ""
	I1201 10:08:00.005533    6679 cni.go:136] 0 nodes found, recommending kindnet
	I1201 10:08:00.005538    6679 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1201 10:08:00.005548    6679 start_flags.go:323] config:
	{Name:multinode-486000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-486000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:doc
ker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAge
ntPID:0 AutoPauseInterval:1m0s GPUs:}
	I1201 10:08:00.010383    6679 iso.go:125] acquiring lock: {Name:mk3aa781e939aa763c73ad5fb916ad9c0c6cf746 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 10:08:00.018465    6679 out.go:177] * Starting control plane node multinode-486000 in cluster multinode-486000
	I1201 10:08:00.022385    6679 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1201 10:08:00.022414    6679 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I1201 10:08:00.022424    6679 cache.go:56] Caching tarball of preloaded images
	I1201 10:08:00.022484    6679 preload.go:174] Found /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1201 10:08:00.022490    6679 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1201 10:08:00.022755    6679 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17703-5375/.minikube/profiles/multinode-486000/config.json ...
	I1201 10:08:00.022767    6679 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17703-5375/.minikube/profiles/multinode-486000/config.json: {Name:mkd70a9d185058f11f5f51661fc1269278818b6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 10:08:00.022975    6679 start.go:365] acquiring machines lock for multinode-486000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:08:00.023010    6679 start.go:369] acquired machines lock for "multinode-486000" in 29.041µs
	I1201 10:08:00.023023    6679 start.go:93] Provisioning new machine with config: &{Name:multinode-486000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-486000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSiz
e:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1201 10:08:00.023053    6679 start.go:125] createHost starting for "" (driver="qemu2")
	I1201 10:08:00.029370    6679 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1201 10:08:00.047503    6679 start.go:159] libmachine.API.Create for "multinode-486000" (driver="qemu2")
	I1201 10:08:00.047531    6679 client.go:168] LocalClient.Create starting
	I1201 10:08:00.047610    6679 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/ca.pem
	I1201 10:08:00.047643    6679 main.go:141] libmachine: Decoding PEM data...
	I1201 10:08:00.047655    6679 main.go:141] libmachine: Parsing certificate...
	I1201 10:08:00.047694    6679 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/cert.pem
	I1201 10:08:00.047718    6679 main.go:141] libmachine: Decoding PEM data...
	I1201 10:08:00.047727    6679 main.go:141] libmachine: Parsing certificate...
	I1201 10:08:00.048098    6679 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17703-5375/.minikube/cache/iso/arm64/minikube-v1.32.1-1701387192-17703-arm64.iso...
	I1201 10:08:00.179724    6679 main.go:141] libmachine: Creating SSH key...
	I1201 10:08:00.399152    6679 main.go:141] libmachine: Creating Disk image...
	I1201 10:08:00.399164    6679 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1201 10:08:00.399345    6679 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/multinode-486000/disk.qcow2.raw /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/multinode-486000/disk.qcow2
	I1201 10:08:00.411569    6679 main.go:141] libmachine: STDOUT: 
	I1201 10:08:00.411594    6679 main.go:141] libmachine: STDERR: 
	I1201 10:08:00.411647    6679 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/multinode-486000/disk.qcow2 +20000M
	I1201 10:08:00.422164    6679 main.go:141] libmachine: STDOUT: Image resized.
	
	I1201 10:08:00.422176    6679 main.go:141] libmachine: STDERR: 
	I1201 10:08:00.422199    6679 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/multinode-486000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/multinode-486000/disk.qcow2
	I1201 10:08:00.422206    6679 main.go:141] libmachine: Starting QEMU VM...
	I1201 10:08:00.422245    6679 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/multinode-486000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/multinode-486000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/multinode-486000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:57:31:a7:e4:12 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/multinode-486000/disk.qcow2
	I1201 10:08:00.423866    6679 main.go:141] libmachine: STDOUT: 
	I1201 10:08:00.423883    6679 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:08:00.423902    6679 client.go:171] LocalClient.Create took 376.373916ms
	I1201 10:08:02.426038    6679 start.go:128] duration metric: createHost completed in 2.4030215s
	I1201 10:08:02.426094    6679 start.go:83] releasing machines lock for "multinode-486000", held for 2.403131583s
	W1201 10:08:02.426172    6679 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:08:02.444372    6679 out.go:177] * Deleting "multinode-486000" in qemu2 ...
	W1201 10:08:02.472060    6679 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:08:02.472097    6679 start.go:709] Will try again in 5 seconds ...
	I1201 10:08:07.474156    6679 start.go:365] acquiring machines lock for multinode-486000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:08:07.474547    6679 start.go:369] acquired machines lock for "multinode-486000" in 279.5µs
	I1201 10:08:07.474636    6679 start.go:93] Provisioning new machine with config: &{Name:multinode-486000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-486000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSiz
e:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1201 10:08:07.474886    6679 start.go:125] createHost starting for "" (driver="qemu2")
	I1201 10:08:07.497671    6679 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1201 10:08:07.547190    6679 start.go:159] libmachine.API.Create for "multinode-486000" (driver="qemu2")
	I1201 10:08:07.547228    6679 client.go:168] LocalClient.Create starting
	I1201 10:08:07.547365    6679 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/ca.pem
	I1201 10:08:07.547422    6679 main.go:141] libmachine: Decoding PEM data...
	I1201 10:08:07.547449    6679 main.go:141] libmachine: Parsing certificate...
	I1201 10:08:07.547510    6679 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/cert.pem
	I1201 10:08:07.547555    6679 main.go:141] libmachine: Decoding PEM data...
	I1201 10:08:07.547570    6679 main.go:141] libmachine: Parsing certificate...
	I1201 10:08:07.548104    6679 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17703-5375/.minikube/cache/iso/arm64/minikube-v1.32.1-1701387192-17703-arm64.iso...
	I1201 10:08:07.691131    6679 main.go:141] libmachine: Creating SSH key...
	I1201 10:08:07.773733    6679 main.go:141] libmachine: Creating Disk image...
	I1201 10:08:07.773738    6679 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1201 10:08:07.773924    6679 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/multinode-486000/disk.qcow2.raw /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/multinode-486000/disk.qcow2
	I1201 10:08:07.786049    6679 main.go:141] libmachine: STDOUT: 
	I1201 10:08:07.786066    6679 main.go:141] libmachine: STDERR: 
	I1201 10:08:07.786124    6679 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/multinode-486000/disk.qcow2 +20000M
	I1201 10:08:07.796464    6679 main.go:141] libmachine: STDOUT: Image resized.
	
	I1201 10:08:07.796479    6679 main.go:141] libmachine: STDERR: 
	I1201 10:08:07.796499    6679 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/multinode-486000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/multinode-486000/disk.qcow2
	I1201 10:08:07.796506    6679 main.go:141] libmachine: Starting QEMU VM...
	I1201 10:08:07.796545    6679 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/multinode-486000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/multinode-486000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/multinode-486000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:61:ec:0e:0a:0b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/multinode-486000/disk.qcow2
	I1201 10:08:07.798149    6679 main.go:141] libmachine: STDOUT: 
	I1201 10:08:07.798163    6679 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:08:07.798175    6679 client.go:171] LocalClient.Create took 250.949541ms
	I1201 10:08:09.800262    6679 start.go:128] duration metric: createHost completed in 2.325410958s
	I1201 10:08:09.800299    6679 start.go:83] releasing machines lock for "multinode-486000", held for 2.325786792s
	W1201 10:08:09.800731    6679 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-486000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-486000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:08:09.819271    6679 out.go:177] 
	W1201 10:08:09.824343    6679 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1201 10:08:09.824394    6679 out.go:239] * 
	* 
	W1201 10:08:09.826984    6679 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 10:08:09.842117    6679 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:88: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-486000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-486000 -n multinode-486000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-486000 -n multinode-486000: exit status 7 (71.674709ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-486000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (10.05s)
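Both provisioning attempts above fail at the same point: QEMU is launched through /opt/socket_vmnet/bin/socket_vmnet_client and the client reports "Connection refused" on /var/run/socket_vmnet, i.e. nothing is listening on the vmnet socket on the host. The following is a minimal standalone Go sketch, not part of the test suite; it assumes only the standard library and takes the socket path from the SocketVMnetPath value in the config dump above. It probes the same socket so the diagnosis can be confirmed independently of minikube:

	// probe_socket_vmnet.go - standalone sketch (not part of the minikube test
	// suite) that checks whether anything is listening on the unix socket the
	// qemu2 driver tries to use. A "connection refused" here matches the
	// failure captured in the log above.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Path taken from SocketVMnetPath in the profile config dump above.
		const sock = "/var/run/socket_vmnet"

		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
			os.Exit(1)
		}
		defer conn.Close()
		fmt.Printf("socket_vmnet is listening at %s\n", sock)
	}

If this probe also reports "connection refused", the socket_vmnet daemon on the CI host is down or bound to a different path, which would account for every GUEST_PROVISION failure in this group and the cascading "cluster does not exist" errors in the tests that follow.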

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (115.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-486000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:509: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-486000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (61.693541ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-486000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:511: failed to create busybox deployment to multinode cluster
multinode_test.go:514: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-486000 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-486000 -- rollout status deployment/busybox: exit status 1 (59.347916ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-486000"

                                                
                                                
** /stderr **
multinode_test.go:516: failed to deploy busybox to multinode cluster
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-486000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-486000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (58.899833ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-486000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-486000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-486000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.629ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-486000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-486000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-486000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.699792ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-486000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-486000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-486000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.062584ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-486000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-486000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-486000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.590959ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-486000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-486000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-486000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.79675ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-486000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-486000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-486000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.137458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-486000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-486000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-486000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.835583ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-486000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-486000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-486000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.738375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-486000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-486000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-486000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.501541ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-486000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-486000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-486000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.505417ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-486000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:540: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:544: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-486000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:544: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-486000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.212792ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-486000"

                                                
                                                
** /stderr **
multinode_test.go:546: failed get Pod names
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-486000 -- exec  -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-486000 -- exec  -- nslookup kubernetes.io: exit status 1 (56.524875ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-486000"

                                                
                                                
** /stderr **
multinode_test.go:554: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:562: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-486000 -- exec  -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-486000 -- exec  -- nslookup kubernetes.default: exit status 1 (56.716167ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-486000"

                                                
                                                
** /stderr **
multinode_test.go:564: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:570: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-486000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-486000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (56.401708ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-486000"

                                                
                                                
** /stderr **
multinode_test.go:572: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-486000 -n multinode-486000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-486000 -n multinode-486000: exit status 7 (31.565125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-486000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (115.12s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-486000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:580: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-486000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.250916ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-486000"

                                                
                                                
** /stderr **
multinode_test.go:582: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-486000 -n multinode-486000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-486000 -n multinode-486000: exit status 7 (31.701209ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-486000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                    
TestMultiNode/serial/AddNode (0.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-486000 -v 3 --alsologtostderr
multinode_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-486000 -v 3 --alsologtostderr: exit status 89 (78.872709ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-486000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 10:10:05.162199    6788 out.go:296] Setting OutFile to fd 1 ...
	I1201 10:10:05.162373    6788 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:10:05.162376    6788 out.go:309] Setting ErrFile to fd 2...
	I1201 10:10:05.162378    6788 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:10:05.162492    6788 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17703-5375/.minikube/bin
	I1201 10:10:05.162721    6788 mustload.go:65] Loading cluster: multinode-486000
	I1201 10:10:05.162893    6788 config.go:182] Loaded profile config "multinode-486000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1201 10:10:05.182077    6788 out.go:177] * The control plane node must be running for this command
	I1201 10:10:05.195967    6788 out.go:177]   To start a cluster, run: "minikube start -p multinode-486000"

                                                
                                                
** /stderr **
multinode_test.go:113: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-486000 -v 3 --alsologtostderr" : exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-486000 -n multinode-486000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-486000 -n multinode-486000: exit status 7 (33.577541ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-486000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.11s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-486000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:211: (dbg) Non-zero exit: kubectl --context multinode-486000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (24.940125ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-486000

                                                
                                                
** /stderr **
multinode_test.go:213: failed to 'kubectl get nodes' with args "kubectl --context multinode-486000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:220: failed to decode json from label list: args "kubectl --context multinode-486000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-486000 -n multinode-486000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-486000 -n multinode-486000: exit status 7 (32.060125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-486000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:156: expected profile "multinode-486000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-486000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-486000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"VMDriver\":\"\",\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidd
en\":false,\"KVMNUMACount\":1,\"APIServerPort\":0,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"multinode-486000\",\"Namespace\":\"default\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\",\"NodeIP\":\"\",\"NodePort\":8443,\"NodeName\":\"\"},\"Nodes\":[{\"Name\":\"\
",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPat
h\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"AutoPauseInterval\":60000000000,\"GPUs\":\"\"},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-486000 -n multinode-486000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-486000 -n multinode-486000: exit status 7 (31.960959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-486000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.10s)
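The assertion above is driven by the shape of the `minikube profile list --output json` payload: the check counts the entries under Config.Nodes for the profile and expected 3 nodes, while the captured output carries a single node. The standalone Go sketch below illustrates that counting step against a trimmed version of the JSON shown in the log; the structs are stand-ins covering only the fields visible above, not minikube's real config types:

	// count_profile_nodes.go - standalone sketch of the node-count check behind
	// TestMultiNode/serial/ProfileList. The structs below are trimmed stand-ins
	// for the fields visible in the captured JSON, not minikube's real types.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Config struct {
				Nodes []struct {
					Name string `json:"Name"`
				} `json:"Nodes"`
			} `json:"Config"`
		} `json:"valid"`
	}

	func main() {
		// Trimmed from the output captured in the log: one profile, one node
		// (the test expected 3 after the earlier node add).
		raw := []byte(`{"invalid":[],"valid":[{"Name":"multinode-486000","Config":{"Nodes":[{"Name":""}]}}]}`)

		var pl profileList
		if err := json.Unmarshal(raw, &pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Valid {
			fmt.Printf("profile %q has %d node(s)\n", p.Name, len(p.Config.Nodes))
		}
	}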

                                                
                                    
TestMultiNode/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-486000 status --output json --alsologtostderr
multinode_test.go:174: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-486000 status --output json --alsologtostderr: exit status 7 (31.344333ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-486000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 10:10:05.436549    6801 out.go:296] Setting OutFile to fd 1 ...
	I1201 10:10:05.436693    6801 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:10:05.436697    6801 out.go:309] Setting ErrFile to fd 2...
	I1201 10:10:05.436699    6801 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:10:05.436827    6801 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17703-5375/.minikube/bin
	I1201 10:10:05.436943    6801 out.go:303] Setting JSON to true
	I1201 10:10:05.436957    6801 mustload.go:65] Loading cluster: multinode-486000
	I1201 10:10:05.437011    6801 notify.go:220] Checking for updates...
	I1201 10:10:05.437143    6801 config.go:182] Loaded profile config "multinode-486000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1201 10:10:05.437148    6801 status.go:255] checking status of multinode-486000 ...
	I1201 10:10:05.437342    6801 status.go:330] multinode-486000 host status = "Stopped" (err=<nil>)
	I1201 10:10:05.437345    6801 status.go:343] host is not running, skipping remaining checks
	I1201 10:10:05.437347    6801 status.go:257] multinode-486000 status: &{Name:multinode-486000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:181: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-486000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-486000 -n multinode-486000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-486000 -n multinode-486000: exit status 7 (31.858917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-486000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
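The decode error in this test ("json: cannot unmarshal object into Go value of type []cmd.Status") arises because the status command printed a single JSON object for the lone stopped host while the test decodes into a slice. The standalone Go sketch below reproduces that mismatch with a stand-in Status struct; it is an illustration of the error mechanism, not minikube's real cmd.Status type:

	// unmarshal_status.go - standalone sketch reproducing the decode error seen
	// in TestMultiNode/serial/CopyFile: a single JSON object cannot be decoded
	// into a slice. Status here is a trimmed stand-in, not minikube's cmd.Status.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	type Status struct {
		Name string
		Host string
	}

	func main() {
		single := []byte(`{"Name":"multinode-486000","Host":"Stopped"}`)

		var many []Status
		if err := json.Unmarshal(single, &many); err != nil {
			// Prints: json: cannot unmarshal object into Go value of type []main.Status
			fmt.Println("decode into slice fails:", err)
		}

		var one Status
		if err := json.Unmarshal(single, &one); err == nil {
			fmt.Printf("decode into struct works: %+v\n", one)
		}
	}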

                                                
                                    
TestMultiNode/serial/StopNode (0.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-486000 node stop m03
multinode_test.go:238: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-486000 node stop m03: exit status 85 (49.916416ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:240: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-486000 node stop m03": exit status 85
multinode_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-486000 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-486000 status: exit status 7 (32.435791ms)

                                                
                                                
-- stdout --
	multinode-486000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-486000 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-486000 status --alsologtostderr: exit status 7 (31.556959ms)

                                                
                                                
-- stdout --
	multinode-486000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 10:10:05.583108    6809 out.go:296] Setting OutFile to fd 1 ...
	I1201 10:10:05.583261    6809 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:10:05.583264    6809 out.go:309] Setting ErrFile to fd 2...
	I1201 10:10:05.583267    6809 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:10:05.583392    6809 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17703-5375/.minikube/bin
	I1201 10:10:05.583498    6809 out.go:303] Setting JSON to false
	I1201 10:10:05.583512    6809 mustload.go:65] Loading cluster: multinode-486000
	I1201 10:10:05.583568    6809 notify.go:220] Checking for updates...
	I1201 10:10:05.583687    6809 config.go:182] Loaded profile config "multinode-486000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1201 10:10:05.583692    6809 status.go:255] checking status of multinode-486000 ...
	I1201 10:10:05.583897    6809 status.go:330] multinode-486000 host status = "Stopped" (err=<nil>)
	I1201 10:10:05.583901    6809 status.go:343] host is not running, skipping remaining checks
	I1201 10:10:05.583904    6809 status.go:257] multinode-486000 status: &{Name:multinode-486000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:257: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-486000 status --alsologtostderr": multinode-486000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-486000 -n multinode-486000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-486000 -n multinode-486000: exit status 7 (31.931292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-486000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.15s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (0.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-486000 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-486000 node start m03 --alsologtostderr: exit status 85 (51.38925ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 10:10:05.647276    6813 out.go:296] Setting OutFile to fd 1 ...
	I1201 10:10:05.647427    6813 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:10:05.647431    6813 out.go:309] Setting ErrFile to fd 2...
	I1201 10:10:05.647433    6813 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:10:05.647550    6813 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17703-5375/.minikube/bin
	I1201 10:10:05.647784    6813 mustload.go:65] Loading cluster: multinode-486000
	I1201 10:10:05.647971    6813 config.go:182] Loaded profile config "multinode-486000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1201 10:10:05.652457    6813 out.go:177] 
	W1201 10:10:05.656654    6813 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W1201 10:10:05.656658    6813 out.go:239] * 
	* 
	W1201 10:10:05.658433    6813 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 10:10:05.662471    6813 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:284: I1201 10:10:05.647276    6813 out.go:296] Setting OutFile to fd 1 ...
I1201 10:10:05.647427    6813 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1201 10:10:05.647431    6813 out.go:309] Setting ErrFile to fd 2...
I1201 10:10:05.647433    6813 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1201 10:10:05.647550    6813 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17703-5375/.minikube/bin
I1201 10:10:05.647784    6813 mustload.go:65] Loading cluster: multinode-486000
I1201 10:10:05.647971    6813 config.go:182] Loaded profile config "multinode-486000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1201 10:10:05.652457    6813 out.go:177] 
W1201 10:10:05.656654    6813 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W1201 10:10:05.656658    6813 out.go:239] * 
* 
W1201 10:10:05.658433    6813 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1201 10:10:05.662471    6813 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-486000 node start m03 --alsologtostderr": exit status 85
multinode_test.go:289: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-486000 status
multinode_test.go:289: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-486000 status: exit status 7 (31.977417ms)

                                                
                                                
-- stdout --
	multinode-486000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:291: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-486000 status" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-486000 -n multinode-486000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-486000 -n multinode-486000: exit status 7 (31.48275ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-486000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (0.12s)
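The "(dbg) Run" and "Non-zero exit" lines above reflect the harness running the minikube binary and reading its process exit code (85 for the missing node m03, then 7 from status). A minimal sketch of that run-and-check pattern, assuming only the standard library and the binary path, profile, and arguments shown in the log above (this is not the actual helper code in multinode_test.go / helpers_test.go):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation as the failing step above (binary path and profile name copied from the log).
	cmd := exec.Command("out/minikube-darwin-arm64", "-p", "multinode-486000",
		"node", "start", "m03", "--alsologtostderr")
	out, err := cmd.CombinedOutput()
	if exitErr, ok := err.(*exec.ExitError); ok {
		// Here the harness would observe exit status 85 (GUEST_NODE_RETRIEVE: node m03 not found).
		fmt.Printf("non-zero exit: %d\n%s", exitErr.ExitCode(), out)
		return
	}
	if err != nil {
		fmt.Println("could not run minikube:", err)
		return
	}
	fmt.Printf("ok:\n%s", out)
}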

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (5.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-486000
multinode_test.go:318: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-486000
multinode_test.go:323: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-486000 --wait=true -v=8 --alsologtostderr
multinode_test.go:323: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-486000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.205424959s)

                                                
                                                
-- stdout --
	* [multinode-486000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17703
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node multinode-486000 in cluster multinode-486000
	* Restarting existing qemu2 VM for "multinode-486000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-486000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 10:10:05.856623    6823 out.go:296] Setting OutFile to fd 1 ...
	I1201 10:10:05.856758    6823 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:10:05.856761    6823 out.go:309] Setting ErrFile to fd 2...
	I1201 10:10:05.856763    6823 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:10:05.856886    6823 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17703-5375/.minikube/bin
	I1201 10:10:05.857912    6823 out.go:303] Setting JSON to false
	I1201 10:10:05.873875    6823 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2379,"bootTime":1701451826,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1201 10:10:05.873968    6823 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1201 10:10:05.878673    6823 out.go:177] * [multinode-486000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1201 10:10:05.890597    6823 out.go:177]   - MINIKUBE_LOCATION=17703
	I1201 10:10:05.885674    6823 notify.go:220] Checking for updates...
	I1201 10:10:05.901639    6823 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	I1201 10:10:05.909594    6823 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1201 10:10:05.913578    6823 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 10:10:05.916583    6823 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	I1201 10:10:05.919542    6823 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 10:10:05.923065    6823 config.go:182] Loaded profile config "multinode-486000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1201 10:10:05.923125    6823 driver.go:392] Setting default libvirt URI to qemu:///system
	I1201 10:10:05.927606    6823 out.go:177] * Using the qemu2 driver based on existing profile
	I1201 10:10:05.934529    6823 start.go:298] selected driver: qemu2
	I1201 10:10:05.934535    6823 start.go:902] validating driver "qemu2" against &{Name:multinode-486000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.4 ClusterName:multinode-486000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1201 10:10:05.934589    6823 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 10:10:05.937078    6823 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 10:10:05.937139    6823 cni.go:84] Creating CNI manager for ""
	I1201 10:10:05.937143    6823 cni.go:136] 1 nodes found, recommending kindnet
	I1201 10:10:05.937148    6823 start_flags.go:323] config:
	{Name:multinode-486000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-486000 Namespace:default API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1201 10:10:05.941752    6823 iso.go:125] acquiring lock: {Name:mk3aa781e939aa763c73ad5fb916ad9c0c6cf746 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 10:10:05.949608    6823 out.go:177] * Starting control plane node multinode-486000 in cluster multinode-486000
	I1201 10:10:05.952619    6823 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1201 10:10:05.952646    6823 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I1201 10:10:05.952654    6823 cache.go:56] Caching tarball of preloaded images
	I1201 10:10:05.952707    6823 preload.go:174] Found /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1201 10:10:05.952712    6823 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1201 10:10:05.952772    6823 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17703-5375/.minikube/profiles/multinode-486000/config.json ...
	I1201 10:10:05.953278    6823 start.go:365] acquiring machines lock for multinode-486000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:10:05.953311    6823 start.go:369] acquired machines lock for "multinode-486000" in 26.667µs
	I1201 10:10:05.953320    6823 start.go:96] Skipping create...Using existing machine configuration
	I1201 10:10:05.953326    6823 fix.go:54] fixHost starting: 
	I1201 10:10:05.953442    6823 fix.go:102] recreateIfNeeded on multinode-486000: state=Stopped err=<nil>
	W1201 10:10:05.953451    6823 fix.go:128] unexpected machine state, will restart: <nil>
	I1201 10:10:05.957528    6823 out.go:177] * Restarting existing qemu2 VM for "multinode-486000" ...
	I1201 10:10:05.961485    6823 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/multinode-486000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/multinode-486000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/multinode-486000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:61:ec:0e:0a:0b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/multinode-486000/disk.qcow2
	I1201 10:10:05.963621    6823 main.go:141] libmachine: STDOUT: 
	I1201 10:10:05.963640    6823 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:10:05.963669    6823 fix.go:56] fixHost completed within 10.341459ms
	I1201 10:10:05.963673    6823 start.go:83] releasing machines lock for "multinode-486000", held for 10.357541ms
	W1201 10:10:05.963681    6823 start.go:694] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1201 10:10:05.963724    6823 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:10:05.963729    6823 start.go:709] Will try again in 5 seconds ...
	I1201 10:10:10.965813    6823 start.go:365] acquiring machines lock for multinode-486000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:10:10.966153    6823 start.go:369] acquired machines lock for "multinode-486000" in 225µs
	I1201 10:10:10.966254    6823 start.go:96] Skipping create...Using existing machine configuration
	I1201 10:10:10.966274    6823 fix.go:54] fixHost starting: 
	I1201 10:10:10.966940    6823 fix.go:102] recreateIfNeeded on multinode-486000: state=Stopped err=<nil>
	W1201 10:10:10.966967    6823 fix.go:128] unexpected machine state, will restart: <nil>
	I1201 10:10:10.980234    6823 out.go:177] * Restarting existing qemu2 VM for "multinode-486000" ...
	I1201 10:10:10.985553    6823 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/multinode-486000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/multinode-486000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/multinode-486000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:61:ec:0e:0a:0b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/multinode-486000/disk.qcow2
	I1201 10:10:10.994657    6823 main.go:141] libmachine: STDOUT: 
	I1201 10:10:10.994736    6823 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:10:10.994851    6823 fix.go:56] fixHost completed within 28.581459ms
	I1201 10:10:10.994866    6823 start.go:83] releasing machines lock for "multinode-486000", held for 28.691375ms
	W1201 10:10:10.995064    6823 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-486000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-486000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:10:11.001435    6823 out.go:177] 
	W1201 10:10:11.005564    6823 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1201 10:10:11.005625    6823 out.go:239] * 
	* 
	W1201 10:10:11.008424    6823 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 10:10:11.019440    6823 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:325: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-486000" : exit status 80
multinode_test.go:328: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-486000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-486000 -n multinode-486000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-486000 -n multinode-486000: exit status 7 (36.290625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-486000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (5.41s)
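Both restart attempts above fail at the same step: the qemu2 driver launches qemu through socket_vmnet_client, which cannot reach the /var/run/socket_vmnet unix socket ("Connection refused"). A small sketch that reproduces just that connectivity check, assuming the socket path from the log (this is not minikube code):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the same unix socket socket_vmnet_client uses (path taken from the log above).
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// On this agent the dial would presumably fail with "connection refused",
		// matching the driver error captured in the stderr block above.
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If the dial fails the same way on the agent, restarting the socket_vmnet daemon (or whatever service manager runs it) is the usual fix; the exact service name depends on how socket_vmnet was installed, so treat that as an assumption rather than a confirmed detail of this CI host.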

                                                
                                    
TestMultiNode/serial/DeleteNode (0.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-486000 node delete m03
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-486000 node delete m03: exit status 89 (43.586625ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-486000"

                                                
                                                
-- /stdout --
multinode_test.go:424: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-486000 node delete m03": exit status 89
multinode_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-486000 status --alsologtostderr
multinode_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-486000 status --alsologtostderr: exit status 7 (33.212667ms)

                                                
                                                
-- stdout --
	multinode-486000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 10:10:11.216473    6844 out.go:296] Setting OutFile to fd 1 ...
	I1201 10:10:11.216633    6844 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:10:11.216636    6844 out.go:309] Setting ErrFile to fd 2...
	I1201 10:10:11.216639    6844 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:10:11.216769    6844 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17703-5375/.minikube/bin
	I1201 10:10:11.216896    6844 out.go:303] Setting JSON to false
	I1201 10:10:11.216909    6844 mustload.go:65] Loading cluster: multinode-486000
	I1201 10:10:11.216948    6844 notify.go:220] Checking for updates...
	I1201 10:10:11.217115    6844 config.go:182] Loaded profile config "multinode-486000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1201 10:10:11.217120    6844 status.go:255] checking status of multinode-486000 ...
	I1201 10:10:11.217341    6844 status.go:330] multinode-486000 host status = "Stopped" (err=<nil>)
	I1201 10:10:11.217345    6844 status.go:343] host is not running, skipping remaining checks
	I1201 10:10:11.217347    6844 status.go:257] multinode-486000 status: &{Name:multinode-486000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:430: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-486000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-486000 -n multinode-486000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-486000 -n multinode-486000: exit status 7 (33.500042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-486000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.11s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (0.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-486000 stop
multinode_test.go:348: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-486000 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-486000 status: exit status 7 (33.339333ms)

                                                
                                                
-- stdout --
	multinode-486000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-486000 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-486000 status --alsologtostderr: exit status 7 (33.088333ms)

                                                
                                                
-- stdout --
	multinode-486000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 10:10:11.382053    6852 out.go:296] Setting OutFile to fd 1 ...
	I1201 10:10:11.382221    6852 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:10:11.382224    6852 out.go:309] Setting ErrFile to fd 2...
	I1201 10:10:11.382227    6852 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:10:11.382365    6852 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17703-5375/.minikube/bin
	I1201 10:10:11.382477    6852 out.go:303] Setting JSON to false
	I1201 10:10:11.382491    6852 mustload.go:65] Loading cluster: multinode-486000
	I1201 10:10:11.382536    6852 notify.go:220] Checking for updates...
	I1201 10:10:11.382692    6852 config.go:182] Loaded profile config "multinode-486000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1201 10:10:11.382697    6852 status.go:255] checking status of multinode-486000 ...
	I1201 10:10:11.382941    6852 status.go:330] multinode-486000 host status = "Stopped" (err=<nil>)
	I1201 10:10:11.382945    6852 status.go:343] host is not running, skipping remaining checks
	I1201 10:10:11.382948    6852 status.go:257] multinode-486000 status: &{Name:multinode-486000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:361: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-486000 status --alsologtostderr": multinode-486000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
multinode_test.go:365: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-486000 status --alsologtostderr": multinode-486000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-486000 -n multinode-486000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-486000 -n multinode-486000: exit status 7 (33.061958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-486000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (0.17s)
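The post-mortem helper runs status --format={{.Host}}, which is a Go text/template applied to the status record printed by the verbose run above (Host:Stopped, Kubelet:Stopped, ...); that is why the post-mortem output collapses to the single word "Stopped". A minimal sketch of that formatting step, using an illustrative stand-in struct rather than minikube's own types:

package main

import (
	"os"
	"text/template"
)

// status mirrors the fields visible in the status log line above;
// it is an illustrative stand-in, not minikube's actual type.
type status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	st := status{
		Name: "multinode-486000", Host: "Stopped",
		Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Stopped",
	}
	// --format={{.Host}} renders only the Host field, so the caller sees just "Stopped".
	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
	_ = tmpl.Execute(os.Stdout, st)
}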

                                                
                                    
TestMultiNode/serial/RestartMultiNode (5.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:382: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-486000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:382: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-486000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.19921875s)

                                                
                                                
-- stdout --
	* [multinode-486000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17703
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node multinode-486000 in cluster multinode-486000
	* Restarting existing qemu2 VM for "multinode-486000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-486000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 10:10:11.447548    6856 out.go:296] Setting OutFile to fd 1 ...
	I1201 10:10:11.447695    6856 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:10:11.447698    6856 out.go:309] Setting ErrFile to fd 2...
	I1201 10:10:11.447701    6856 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:10:11.447820    6856 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17703-5375/.minikube/bin
	I1201 10:10:11.448953    6856 out.go:303] Setting JSON to false
	I1201 10:10:11.465447    6856 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2385,"bootTime":1701451826,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1201 10:10:11.465558    6856 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1201 10:10:11.470472    6856 out.go:177] * [multinode-486000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1201 10:10:11.471992    6856 out.go:177]   - MINIKUBE_LOCATION=17703
	I1201 10:10:11.472063    6856 notify.go:220] Checking for updates...
	I1201 10:10:11.481428    6856 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	I1201 10:10:11.484393    6856 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1201 10:10:11.487383    6856 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 10:10:11.490478    6856 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	I1201 10:10:11.492056    6856 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 10:10:11.495680    6856 config.go:182] Loaded profile config "multinode-486000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1201 10:10:11.495951    6856 driver.go:392] Setting default libvirt URI to qemu:///system
	I1201 10:10:11.500431    6856 out.go:177] * Using the qemu2 driver based on existing profile
	I1201 10:10:11.506376    6856 start.go:298] selected driver: qemu2
	I1201 10:10:11.506380    6856 start.go:902] validating driver "qemu2" against &{Name:multinode-486000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.4 ClusterName:multinode-486000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1201 10:10:11.506432    6856 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 10:10:11.508823    6856 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 10:10:11.508862    6856 cni.go:84] Creating CNI manager for ""
	I1201 10:10:11.508867    6856 cni.go:136] 1 nodes found, recommending kindnet
	I1201 10:10:11.508873    6856 start_flags.go:323] config:
	{Name:multinode-486000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-486000 Namespace:default API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1201 10:10:11.513613    6856 iso.go:125] acquiring lock: {Name:mk3aa781e939aa763c73ad5fb916ad9c0c6cf746 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 10:10:11.520398    6856 out.go:177] * Starting control plane node multinode-486000 in cluster multinode-486000
	I1201 10:10:11.524461    6856 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1201 10:10:11.524483    6856 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I1201 10:10:11.524490    6856 cache.go:56] Caching tarball of preloaded images
	I1201 10:10:11.524533    6856 preload.go:174] Found /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1201 10:10:11.524538    6856 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1201 10:10:11.524594    6856 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17703-5375/.minikube/profiles/multinode-486000/config.json ...
	I1201 10:10:11.525034    6856 start.go:365] acquiring machines lock for multinode-486000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:10:11.525060    6856 start.go:369] acquired machines lock for "multinode-486000" in 20.208µs
	I1201 10:10:11.525069    6856 start.go:96] Skipping create...Using existing machine configuration
	I1201 10:10:11.525074    6856 fix.go:54] fixHost starting: 
	I1201 10:10:11.525187    6856 fix.go:102] recreateIfNeeded on multinode-486000: state=Stopped err=<nil>
	W1201 10:10:11.525194    6856 fix.go:128] unexpected machine state, will restart: <nil>
	I1201 10:10:11.529426    6856 out.go:177] * Restarting existing qemu2 VM for "multinode-486000" ...
	I1201 10:10:11.537463    6856 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/multinode-486000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/multinode-486000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/multinode-486000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:61:ec:0e:0a:0b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/multinode-486000/disk.qcow2
	I1201 10:10:11.539680    6856 main.go:141] libmachine: STDOUT: 
	I1201 10:10:11.539703    6856 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:10:11.539732    6856 fix.go:56] fixHost completed within 14.656292ms
	I1201 10:10:11.539736    6856 start.go:83] releasing machines lock for "multinode-486000", held for 14.672666ms
	W1201 10:10:11.539745    6856 start.go:694] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1201 10:10:11.539794    6856 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:10:11.539799    6856 start.go:709] Will try again in 5 seconds ...
	I1201 10:10:16.541853    6856 start.go:365] acquiring machines lock for multinode-486000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:10:16.542265    6856 start.go:369] acquired machines lock for "multinode-486000" in 325.166µs
	I1201 10:10:16.542373    6856 start.go:96] Skipping create...Using existing machine configuration
	I1201 10:10:16.542400    6856 fix.go:54] fixHost starting: 
	I1201 10:10:16.543109    6856 fix.go:102] recreateIfNeeded on multinode-486000: state=Stopped err=<nil>
	W1201 10:10:16.543137    6856 fix.go:128] unexpected machine state, will restart: <nil>
	I1201 10:10:16.560642    6856 out.go:177] * Restarting existing qemu2 VM for "multinode-486000" ...
	I1201 10:10:16.568062    6856 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/multinode-486000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/multinode-486000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/multinode-486000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:61:ec:0e:0a:0b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/multinode-486000/disk.qcow2
	I1201 10:10:16.577530    6856 main.go:141] libmachine: STDOUT: 
	I1201 10:10:16.577598    6856 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:10:16.577716    6856 fix.go:56] fixHost completed within 35.32275ms
	I1201 10:10:16.577733    6856 start.go:83] releasing machines lock for "multinode-486000", held for 35.444708ms
	W1201 10:10:16.577913    6856 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-486000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-486000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:10:16.586644    6856 out.go:177] 
	W1201 10:10:16.589658    6856 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1201 10:10:16.589686    6856 out.go:239] * 
	* 
	W1201 10:10:16.592331    6856 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 10:10:16.602591    6856 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:384: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-486000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-486000 -n multinode-486000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-486000 -n multinode-486000: exit status 7 (68.99375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-486000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.27s)
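The qemu command line captured in the stderr above is launched through socket_vmnet_client with -netdev socket,id=net0,fd=3, i.e. qemu expects an already-connected network descriptor to be inherited as fd 3; when the connect to /var/run/socket_vmnet is refused there is nothing to hand over and the VM never starts. A hedged sketch of that descriptor-passing mechanism in general (illustrative only, not the socket_vmnet_client implementation):

package main

import (
	"fmt"
	"net"
	"os"
	"os/exec"
)

func main() {
	// Connect to the unix socket first, as socket_vmnet_client does.
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		fmt.Println("connect refused, so there is no descriptor to pass on:", err)
		return
	}
	// Duplicate the connected descriptor as an *os.File so a child process can inherit it.
	f, err := conn.(*net.UnixConn).File()
	if err != nil {
		fmt.Println("cannot extract descriptor:", err)
		return
	}
	// The first ExtraFiles entry shows up in the child as fd 3, which is what the
	// "-netdev socket,id=net0,fd=3" argument in the log refers to.
	cmd := exec.Command("qemu-system-aarch64", "-netdev", "socket,id=net0,fd=3")
	cmd.ExtraFiles = []*os.File{f}
	if err := cmd.Start(); err != nil {
		fmt.Println("failed to start qemu:", err)
	}
}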

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (20.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-486000
multinode_test.go:480: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-486000-m01 --driver=qemu2 
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-486000-m01 --driver=qemu2 : exit status 80 (10.068622417s)

                                                
                                                
-- stdout --
	* [multinode-486000-m01] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17703
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-486000-m01 in cluster multinode-486000-m01
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-486000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-486000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-486000-m02 --driver=qemu2 
multinode_test.go:488: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-486000-m02 --driver=qemu2 : exit status 80 (9.948444167s)

                                                
                                                
-- stdout --
	* [multinode-486000-m02] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17703
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-486000-m02 in cluster multinode-486000-m02
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-486000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-486000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:490: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-486000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:495: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-486000
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-486000: exit status 89 (85.457375ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-486000"

                                                
                                                
-- /stdout --
multinode_test.go:500: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-486000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-486000 -n multinode-486000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-486000 -n multinode-486000: exit status 7 (32.5935ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-486000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.28s)

                                                
                                    
TestPreload (10.03s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-478000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-478000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.855458875s)

                                                
                                                
-- stdout --
	* [test-preload-478000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17703
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node test-preload-478000 in cluster test-preload-478000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-478000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 10:10:37.132840    6912 out.go:296] Setting OutFile to fd 1 ...
	I1201 10:10:37.132970    6912 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:10:37.132973    6912 out.go:309] Setting ErrFile to fd 2...
	I1201 10:10:37.132976    6912 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:10:37.133097    6912 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17703-5375/.minikube/bin
	I1201 10:10:37.134226    6912 out.go:303] Setting JSON to false
	I1201 10:10:37.150295    6912 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2411,"bootTime":1701451826,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1201 10:10:37.150363    6912 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1201 10:10:37.156500    6912 out.go:177] * [test-preload-478000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1201 10:10:37.169468    6912 out.go:177]   - MINIKUBE_LOCATION=17703
	I1201 10:10:37.164468    6912 notify.go:220] Checking for updates...
	I1201 10:10:37.176386    6912 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	I1201 10:10:37.180419    6912 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1201 10:10:37.188373    6912 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 10:10:37.196359    6912 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	I1201 10:10:37.200365    6912 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 10:10:37.203880    6912 config.go:182] Loaded profile config "multinode-486000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1201 10:10:37.203945    6912 driver.go:392] Setting default libvirt URI to qemu:///system
	I1201 10:10:37.208410    6912 out.go:177] * Using the qemu2 driver based on user configuration
	I1201 10:10:37.215230    6912 start.go:298] selected driver: qemu2
	I1201 10:10:37.215237    6912 start.go:902] validating driver "qemu2" against <nil>
	I1201 10:10:37.215243    6912 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 10:10:37.217868    6912 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1201 10:10:37.221407    6912 out.go:177] * Automatically selected the socket_vmnet network
	I1201 10:10:37.225500    6912 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 10:10:37.225540    6912 cni.go:84] Creating CNI manager for ""
	I1201 10:10:37.225548    6912 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1201 10:10:37.225552    6912 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1201 10:10:37.225557    6912 start_flags.go:323] config:
	{Name:test-preload-478000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-478000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunti
me:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock:
SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1201 10:10:37.230466    6912 iso.go:125] acquiring lock: {Name:mk3aa781e939aa763c73ad5fb916ad9c0c6cf746 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 10:10:37.237453    6912 out.go:177] * Starting control plane node test-preload-478000 in cluster test-preload-478000
	I1201 10:10:37.241494    6912 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I1201 10:10:37.241611    6912 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17703-5375/.minikube/profiles/test-preload-478000/config.json ...
	I1201 10:10:37.241627    6912 cache.go:107] acquiring lock: {Name:mk77e14012bc5feecacbad696729eadaa024d606 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 10:10:37.241661    6912 cache.go:107] acquiring lock: {Name:mk9a1f3c3e4539a6fedc26d50bf7d30b45b59878 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 10:10:37.241661    6912 cache.go:107] acquiring lock: {Name:mkab94e9062a19a846267725f1803b0011629e2f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 10:10:37.241670    6912 cache.go:107] acquiring lock: {Name:mk436c54afe54cc7d77b32d26516919488e22d92 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 10:10:37.241649    6912 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17703-5375/.minikube/profiles/test-preload-478000/config.json: {Name:mke54a02d0c9a062a342f1d59b04a4e2504d2201 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 10:10:37.241773    6912 cache.go:107] acquiring lock: {Name:mk8acf8528198c133961e2ed951434a3a765639e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 10:10:37.241813    6912 cache.go:107] acquiring lock: {Name:mk4e220995a74e137de4fceb00b12a6d4d87e442 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 10:10:37.241818    6912 cache.go:107] acquiring lock: {Name:mk8c3972dc653bfb52f81716d5f7834bc04e015f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 10:10:37.241922    6912 cache.go:107] acquiring lock: {Name:mkb372da7e1c0e57708a730f90d5ba40f4fae39e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 10:10:37.242018    6912 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1201 10:10:37.242029    6912 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I1201 10:10:37.242047    6912 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I1201 10:10:37.242050    6912 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1201 10:10:37.242121    6912 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1201 10:10:37.242141    6912 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I1201 10:10:37.242030    6912 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I1201 10:10:37.242236    6912 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1201 10:10:37.242349    6912 start.go:365] acquiring machines lock for test-preload-478000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:10:37.242393    6912 start.go:369] acquired machines lock for "test-preload-478000" in 34.375µs
	I1201 10:10:37.242410    6912 start.go:93] Provisioning new machine with config: &{Name:test-preload-478000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernete
sConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-478000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mo
untMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1201 10:10:37.242457    6912 start.go:125] createHost starting for "" (driver="qemu2")
	I1201 10:10:37.250391    6912 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1201 10:10:37.255178    6912 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1201 10:10:37.255903    6912 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1201 10:10:37.255935    6912 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I1201 10:10:37.256547    6912 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I1201 10:10:37.256726    6912 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1201 10:10:37.256872    6912 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1201 10:10:37.258580    6912 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1201 10:10:37.258628    6912 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I1201 10:10:37.269152    6912 start.go:159] libmachine.API.Create for "test-preload-478000" (driver="qemu2")
	I1201 10:10:37.269177    6912 client.go:168] LocalClient.Create starting
	I1201 10:10:37.269270    6912 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/ca.pem
	I1201 10:10:37.269305    6912 main.go:141] libmachine: Decoding PEM data...
	I1201 10:10:37.269320    6912 main.go:141] libmachine: Parsing certificate...
	I1201 10:10:37.269364    6912 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/cert.pem
	I1201 10:10:37.269388    6912 main.go:141] libmachine: Decoding PEM data...
	I1201 10:10:37.269396    6912 main.go:141] libmachine: Parsing certificate...
	I1201 10:10:37.269866    6912 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17703-5375/.minikube/cache/iso/arm64/minikube-v1.32.1-1701387192-17703-arm64.iso...
	I1201 10:10:37.412900    6912 main.go:141] libmachine: Creating SSH key...
	I1201 10:10:37.499076    6912 main.go:141] libmachine: Creating Disk image...
	I1201 10:10:37.499092    6912 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1201 10:10:37.499304    6912 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/test-preload-478000/disk.qcow2.raw /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/test-preload-478000/disk.qcow2
	I1201 10:10:37.512064    6912 main.go:141] libmachine: STDOUT: 
	I1201 10:10:37.512083    6912 main.go:141] libmachine: STDERR: 
	I1201 10:10:37.512137    6912 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/test-preload-478000/disk.qcow2 +20000M
	I1201 10:10:37.523636    6912 main.go:141] libmachine: STDOUT: Image resized.
	
	I1201 10:10:37.523667    6912 main.go:141] libmachine: STDERR: 
	I1201 10:10:37.523693    6912 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/test-preload-478000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/test-preload-478000/disk.qcow2
	I1201 10:10:37.523718    6912 main.go:141] libmachine: Starting QEMU VM...
	I1201 10:10:37.523793    6912 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/test-preload-478000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/test-preload-478000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/test-preload-478000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:06:44:e7:cc:13 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/test-preload-478000/disk.qcow2
	I1201 10:10:37.526066    6912 main.go:141] libmachine: STDOUT: 
	I1201 10:10:37.526088    6912 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:10:37.526106    6912 client.go:171] LocalClient.Create took 256.929333ms
	I1201 10:10:37.637707    6912 cache.go:162] opening:  /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I1201 10:10:37.646653    6912 cache.go:162] opening:  /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I1201 10:10:37.649174    6912 cache.go:162] opening:  /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1201 10:10:37.649902    6912 cache.go:162] opening:  /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I1201 10:10:37.698564    6912 cache.go:162] opening:  /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	W1201 10:10:37.702007    6912 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1201 10:10:37.702034    6912 cache.go:162] opening:  /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1201 10:10:37.748179    6912 cache.go:162] opening:  /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I1201 10:10:37.850177    6912 cache.go:157] /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I1201 10:10:37.850228    6912 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 608.597125ms
	I1201 10:10:37.850274    6912 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W1201 10:10:38.147234    6912 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1201 10:10:38.147338    6912 cache.go:162] opening:  /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1201 10:10:38.411375    6912 cache.go:157] /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1201 10:10:38.411424    6912 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.1698235s
	I1201 10:10:38.411459    6912 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1201 10:10:39.246990    6912 cache.go:157] /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I1201 10:10:39.247070    6912 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.005328334s
	I1201 10:10:39.247126    6912 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I1201 10:10:39.526424    6912 start.go:128] duration metric: createHost completed in 2.283992584s
	I1201 10:10:39.526486    6912 start.go:83] releasing machines lock for "test-preload-478000", held for 2.284135458s
	W1201 10:10:39.526539    6912 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:10:39.545720    6912 out.go:177] * Deleting "test-preload-478000" in qemu2 ...
	W1201 10:10:39.572134    6912 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:10:39.572170    6912 start.go:709] Will try again in 5 seconds ...
	I1201 10:10:40.805976    6912 cache.go:157] /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I1201 10:10:40.806031    6912 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.564318083s
	I1201 10:10:40.806061    6912 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I1201 10:10:41.642251    6912 cache.go:157] /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I1201 10:10:41.642323    6912 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.4007905s
	I1201 10:10:41.642355    6912 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I1201 10:10:42.967309    6912 cache.go:157] /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I1201 10:10:42.967359    6912 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.725779583s
	I1201 10:10:42.967396    6912 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I1201 10:10:43.519898    6912 cache.go:157] /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I1201 10:10:43.519958    6912 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.278228792s
	I1201 10:10:43.520005    6912 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I1201 10:10:44.572555    6912 start.go:365] acquiring machines lock for test-preload-478000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:10:44.572934    6912 start.go:369] acquired machines lock for "test-preload-478000" in 301.083µs
	I1201 10:10:44.573027    6912 start.go:93] Provisioning new machine with config: &{Name:test-preload-478000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernete
sConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-478000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mo
untMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1201 10:10:44.573285    6912 start.go:125] createHost starting for "" (driver="qemu2")
	I1201 10:10:44.598941    6912 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1201 10:10:44.645479    6912 start.go:159] libmachine.API.Create for "test-preload-478000" (driver="qemu2")
	I1201 10:10:44.645524    6912 client.go:168] LocalClient.Create starting
	I1201 10:10:44.645687    6912 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/ca.pem
	I1201 10:10:44.645763    6912 main.go:141] libmachine: Decoding PEM data...
	I1201 10:10:44.645785    6912 main.go:141] libmachine: Parsing certificate...
	I1201 10:10:44.645880    6912 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/cert.pem
	I1201 10:10:44.645934    6912 main.go:141] libmachine: Decoding PEM data...
	I1201 10:10:44.645952    6912 main.go:141] libmachine: Parsing certificate...
	I1201 10:10:44.646559    6912 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17703-5375/.minikube/cache/iso/arm64/minikube-v1.32.1-1701387192-17703-arm64.iso...
	I1201 10:10:44.791364    6912 main.go:141] libmachine: Creating SSH key...
	I1201 10:10:44.874294    6912 main.go:141] libmachine: Creating Disk image...
	I1201 10:10:44.874301    6912 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1201 10:10:44.874465    6912 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/test-preload-478000/disk.qcow2.raw /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/test-preload-478000/disk.qcow2
	I1201 10:10:44.886479    6912 main.go:141] libmachine: STDOUT: 
	I1201 10:10:44.886500    6912 main.go:141] libmachine: STDERR: 
	I1201 10:10:44.886551    6912 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/test-preload-478000/disk.qcow2 +20000M
	I1201 10:10:44.897250    6912 main.go:141] libmachine: STDOUT: Image resized.
	
	I1201 10:10:44.897269    6912 main.go:141] libmachine: STDERR: 
	I1201 10:10:44.897284    6912 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/test-preload-478000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/test-preload-478000/disk.qcow2
	I1201 10:10:44.897289    6912 main.go:141] libmachine: Starting QEMU VM...
	I1201 10:10:44.897325    6912 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/test-preload-478000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/test-preload-478000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/test-preload-478000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:8a:9b:45:a8:a2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/test-preload-478000/disk.qcow2
	I1201 10:10:44.899083    6912 main.go:141] libmachine: STDOUT: 
	I1201 10:10:44.899101    6912 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:10:44.899119    6912 client.go:171] LocalClient.Create took 253.59075ms
	I1201 10:10:46.119157    6912 cache.go:157] /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I1201 10:10:46.119233    6912 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 8.877775833s
	I1201 10:10:46.119260    6912 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I1201 10:10:46.119314    6912 cache.go:87] Successfully saved all images to host disk.
	I1201 10:10:46.901336    6912 start.go:128] duration metric: createHost completed in 2.32805725s
	I1201 10:10:46.901402    6912 start.go:83] releasing machines lock for "test-preload-478000", held for 2.328499167s
	W1201 10:10:46.901714    6912 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-478000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-478000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:10:46.918194    6912 out.go:177] 
	W1201 10:10:46.928462    6912 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1201 10:10:46.928504    6912 out.go:239] * 
	* 
	W1201 10:10:46.931165    6912 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 10:10:46.943278    6912 out.go:177] 

                                                
                                                
** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-478000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:523: *** TestPreload FAILED at 2023-12-01 10:10:46.9612 -0800 PST m=+461.001490210
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-478000 -n test-preload-478000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-478000 -n test-preload-478000: exit status 7 (68.348625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-478000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-478000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-478000
--- FAIL: TestPreload (10.03s)
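
Note on the failure above: every qemu2 start in this run dies at the same point — the socket_vmnet launch wrapper cannot reach /var/run/socket_vmnet ("Connection refused" in the stderr above), so the VM is never created. The following is a minimal, standalone Go sketch (not part of the test suite) for reproducing that connectivity check on the CI agent; the socket path is taken from the SocketVMnetPath value in the logged machine config, and the timeout is an arbitrary illustrative choice.

	// probe_socket_vmnet.go - hedged sketch: dial the same unix socket the
	// qemu launch above fails on, distinguishing "socket file missing" from
	// "socket present but no daemon serving it".
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // from SocketVMnetPath in the logged config

		if _, err := os.Stat(sock); err != nil {
			fmt.Printf("socket file check failed: %v\n", err)
		}

		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// With no socket_vmnet daemon listening, this reports the same
			// "connection refused" condition seen in the test output.
			fmt.Printf("dial failed: %v\n", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the dial fails on the agent, restarting the socket_vmnet service (for a Homebrew install, typically `sudo brew services start socket_vmnet`) would likely clear this and the other GUEST_PROVISION failures in this run.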

                                                
                                    
TestScheduledStopUnix (10.01s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-828000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-828000 --memory=2048 --driver=qemu2 : exit status 80 (9.842209208s)

                                                
                                                
-- stdout --
	* [scheduled-stop-828000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17703
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node scheduled-stop-828000 in cluster scheduled-stop-828000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-828000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-828000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

                                                
                                                
-- stdout --
	* [scheduled-stop-828000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17703
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node scheduled-stop-828000 in cluster scheduled-stop-828000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-828000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-828000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:523: *** TestScheduledStopUnix FAILED at 2023-12-01 10:10:56.97659 -0800 PST m=+471.017118210
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-828000 -n scheduled-stop-828000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-828000 -n scheduled-stop-828000: exit status 7 (69.15175ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-828000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-828000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-828000
--- FAIL: TestScheduledStopUnix (10.01s)

                                                
                                    
TestSkaffold (12.15s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe2057919546 version
skaffold_test.go:63: skaffold version: v2.9.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-807000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-807000 --memory=2600 --driver=qemu2 : exit status 80 (9.989590416s)

                                                
                                                
-- stdout --
	* [skaffold-807000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17703
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node skaffold-807000 in cluster skaffold-807000
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-807000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-807000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

                                                
                                                
-- stdout --
	* [skaffold-807000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17703
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node skaffold-807000 in cluster skaffold-807000
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-807000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-807000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:523: *** TestSkaffold FAILED at 2023-12-01 10:11:09.129829 -0800 PST m=+483.170645710
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-807000 -n skaffold-807000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-807000 -n skaffold-807000: exit status 7 (63.051042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-807000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-807000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-807000
--- FAIL: TestSkaffold (12.15s)

                                                
                                    
TestRunningBinaryUpgrade (145.41s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:107: v1.6.2 release installation failed: bad response code: 404
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-12-01 10:14:14.768806 -0800 PST m=+668.814037501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-660000 -n running-upgrade-660000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-660000 -n running-upgrade-660000: exit status 85 (113.451917ms)

                                                
                                                
-- stdout --
	* Profile "running-upgrade-660000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p running-upgrade-660000"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "running-upgrade-660000" host is not running, skipping log retrieval (state="* Profile \"running-upgrade-660000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p running-upgrade-660000\"")
helpers_test.go:175: Cleaning up "running-upgrade-660000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-660000
--- FAIL: TestRunningBinaryUpgrade (145.41s)
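
Note on the failure above: unlike the socket_vmnet failures, this test never reaches VM creation — installing the v1.6.2 release binary returns HTTP 404 (version_upgrade_test.go:107), plausibly because that release predates darwin/arm64 builds. A hedged Go sketch for checking whether the asset exists follows; the URL pattern is an assumption based on minikube's GitHub release layout and is not taken from the test output.

	// check_release_asset.go - hedged sketch: HEAD-request the presumed
	// download URL for the old release binary the upgrade test needs.
	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		// Assumed URL pattern; the test does not log the URL it tried.
		url := "https://github.com/kubernetes/minikube/releases/download/v1.6.2/minikube-darwin-arm64"

		resp, err := http.Head(url)
		if err != nil {
			fmt.Println("request failed:", err)
			return
		}
		defer resp.Body.Close()

		// A 404 here would match the "bad response code: 404" reported above.
		fmt.Println("status for", url, "->", resp.Status)
	}

If that is the cause, the fix belongs in the test's release-selection logic (skipping upgrade-from releases with no arm64 asset) rather than on the CI host.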

                                                
                                    
TestKubernetesUpgrade (15.38s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-512000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:235: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-512000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.806766292s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-512000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17703
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kubernetes-upgrade-512000 in cluster kubernetes-upgrade-512000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-512000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 10:14:15.149398    7357 out.go:296] Setting OutFile to fd 1 ...
	I1201 10:14:15.149549    7357 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:14:15.149554    7357 out.go:309] Setting ErrFile to fd 2...
	I1201 10:14:15.149557    7357 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:14:15.149674    7357 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17703-5375/.minikube/bin
	I1201 10:14:15.150798    7357 out.go:303] Setting JSON to false
	I1201 10:14:15.166780    7357 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2629,"bootTime":1701451826,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1201 10:14:15.166875    7357 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1201 10:14:15.171776    7357 out.go:177] * [kubernetes-upgrade-512000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1201 10:14:15.183676    7357 out.go:177]   - MINIKUBE_LOCATION=17703
	I1201 10:14:15.180814    7357 notify.go:220] Checking for updates...
	I1201 10:14:15.189701    7357 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	I1201 10:14:15.198722    7357 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1201 10:14:15.202725    7357 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 10:14:15.206650    7357 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	I1201 10:14:15.213652    7357 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 10:14:15.218066    7357 config.go:182] Loaded profile config "cert-expiration-650000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1201 10:14:15.218126    7357 config.go:182] Loaded profile config "multinode-486000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1201 10:14:15.218179    7357 driver.go:392] Setting default libvirt URI to qemu:///system
	I1201 10:14:15.221668    7357 out.go:177] * Using the qemu2 driver based on user configuration
	I1201 10:14:15.228663    7357 start.go:298] selected driver: qemu2
	I1201 10:14:15.228670    7357 start.go:902] validating driver "qemu2" against <nil>
	I1201 10:14:15.228674    7357 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 10:14:15.231150    7357 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1201 10:14:15.234726    7357 out.go:177] * Automatically selected the socket_vmnet network
	I1201 10:14:15.237736    7357 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1201 10:14:15.237772    7357 cni.go:84] Creating CNI manager for ""
	I1201 10:14:15.237779    7357 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1201 10:14:15.237782    7357 start_flags.go:323] config:
	{Name:kubernetes-upgrade-512000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-512000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1201 10:14:15.242559    7357 iso.go:125] acquiring lock: {Name:mk3aa781e939aa763c73ad5fb916ad9c0c6cf746 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 10:14:15.248033    7357 out.go:177] * Starting control plane node kubernetes-upgrade-512000 in cluster kubernetes-upgrade-512000
	I1201 10:14:15.250743    7357 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1201 10:14:15.250775    7357 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I1201 10:14:15.250785    7357 cache.go:56] Caching tarball of preloaded images
	I1201 10:14:15.250863    7357 preload.go:174] Found /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1201 10:14:15.250870    7357 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1201 10:14:15.250953    7357 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17703-5375/.minikube/profiles/kubernetes-upgrade-512000/config.json ...
	I1201 10:14:15.250964    7357 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17703-5375/.minikube/profiles/kubernetes-upgrade-512000/config.json: {Name:mk88899ead321481ebf0c76037d0829d3f67fd94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 10:14:15.251320    7357 start.go:365] acquiring machines lock for kubernetes-upgrade-512000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:14:15.251357    7357 start.go:369] acquired machines lock for "kubernetes-upgrade-512000" in 27.583µs
	I1201 10:14:15.251370    7357 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-512000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-512000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID
:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1201 10:14:15.251406    7357 start.go:125] createHost starting for "" (driver="qemu2")
	I1201 10:14:15.254646    7357 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1201 10:14:15.271240    7357 start.go:159] libmachine.API.Create for "kubernetes-upgrade-512000" (driver="qemu2")
	I1201 10:14:15.271276    7357 client.go:168] LocalClient.Create starting
	I1201 10:14:15.271343    7357 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/ca.pem
	I1201 10:14:15.271375    7357 main.go:141] libmachine: Decoding PEM data...
	I1201 10:14:15.271387    7357 main.go:141] libmachine: Parsing certificate...
	I1201 10:14:15.271423    7357 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/cert.pem
	I1201 10:14:15.271445    7357 main.go:141] libmachine: Decoding PEM data...
	I1201 10:14:15.271452    7357 main.go:141] libmachine: Parsing certificate...
	I1201 10:14:15.271771    7357 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17703-5375/.minikube/cache/iso/arm64/minikube-v1.32.1-1701387192-17703-arm64.iso...
	I1201 10:14:15.402415    7357 main.go:141] libmachine: Creating SSH key...
	I1201 10:14:15.490927    7357 main.go:141] libmachine: Creating Disk image...
	I1201 10:14:15.490934    7357 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1201 10:14:15.491099    7357 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/kubernetes-upgrade-512000/disk.qcow2.raw /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/kubernetes-upgrade-512000/disk.qcow2
	I1201 10:14:15.503255    7357 main.go:141] libmachine: STDOUT: 
	I1201 10:14:15.503280    7357 main.go:141] libmachine: STDERR: 
	I1201 10:14:15.503335    7357 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/kubernetes-upgrade-512000/disk.qcow2 +20000M
	I1201 10:14:15.513881    7357 main.go:141] libmachine: STDOUT: Image resized.
	
	I1201 10:14:15.513895    7357 main.go:141] libmachine: STDERR: 
	I1201 10:14:15.513920    7357 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/kubernetes-upgrade-512000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/kubernetes-upgrade-512000/disk.qcow2
	I1201 10:14:15.513925    7357 main.go:141] libmachine: Starting QEMU VM...
	I1201 10:14:15.513957    7357 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/kubernetes-upgrade-512000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/kubernetes-upgrade-512000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/kubernetes-upgrade-512000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:ea:99:cb:f3:c0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/kubernetes-upgrade-512000/disk.qcow2
	I1201 10:14:15.515609    7357 main.go:141] libmachine: STDOUT: 
	I1201 10:14:15.515643    7357 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:14:15.515665    7357 client.go:171] LocalClient.Create took 244.388958ms
	I1201 10:14:17.517793    7357 start.go:128] duration metric: createHost completed in 2.2664235s
	I1201 10:14:17.517847    7357 start.go:83] releasing machines lock for "kubernetes-upgrade-512000", held for 2.266535208s
	W1201 10:14:17.517915    7357 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:14:17.534715    7357 out.go:177] * Deleting "kubernetes-upgrade-512000" in qemu2 ...
	W1201 10:14:17.566700    7357 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:14:17.566739    7357 start.go:709] Will try again in 5 seconds ...
	I1201 10:14:22.568661    7357 start.go:365] acquiring machines lock for kubernetes-upgrade-512000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:14:22.569053    7357 start.go:369] acquired machines lock for "kubernetes-upgrade-512000" in 289.208µs
	I1201 10:14:22.569160    7357 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-512000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-512000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID
:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1201 10:14:22.569353    7357 start.go:125] createHost starting for "" (driver="qemu2")
	I1201 10:14:22.589028    7357 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1201 10:14:22.636825    7357 start.go:159] libmachine.API.Create for "kubernetes-upgrade-512000" (driver="qemu2")
	I1201 10:14:22.636891    7357 client.go:168] LocalClient.Create starting
	I1201 10:14:22.637010    7357 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/ca.pem
	I1201 10:14:22.637069    7357 main.go:141] libmachine: Decoding PEM data...
	I1201 10:14:22.637085    7357 main.go:141] libmachine: Parsing certificate...
	I1201 10:14:22.637144    7357 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/cert.pem
	I1201 10:14:22.637185    7357 main.go:141] libmachine: Decoding PEM data...
	I1201 10:14:22.637200    7357 main.go:141] libmachine: Parsing certificate...
	I1201 10:14:22.637682    7357 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17703-5375/.minikube/cache/iso/arm64/minikube-v1.32.1-1701387192-17703-arm64.iso...
	I1201 10:14:22.781461    7357 main.go:141] libmachine: Creating SSH key...
	I1201 10:14:22.836539    7357 main.go:141] libmachine: Creating Disk image...
	I1201 10:14:22.836544    7357 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1201 10:14:22.836740    7357 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/kubernetes-upgrade-512000/disk.qcow2.raw /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/kubernetes-upgrade-512000/disk.qcow2
	I1201 10:14:22.848542    7357 main.go:141] libmachine: STDOUT: 
	I1201 10:14:22.848578    7357 main.go:141] libmachine: STDERR: 
	I1201 10:14:22.848637    7357 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/kubernetes-upgrade-512000/disk.qcow2 +20000M
	I1201 10:14:22.859172    7357 main.go:141] libmachine: STDOUT: Image resized.
	
	I1201 10:14:22.859192    7357 main.go:141] libmachine: STDERR: 
	I1201 10:14:22.859206    7357 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/kubernetes-upgrade-512000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/kubernetes-upgrade-512000/disk.qcow2
	I1201 10:14:22.859212    7357 main.go:141] libmachine: Starting QEMU VM...
	I1201 10:14:22.859257    7357 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/kubernetes-upgrade-512000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/kubernetes-upgrade-512000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/kubernetes-upgrade-512000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:0a:a0:ac:ba:30 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/kubernetes-upgrade-512000/disk.qcow2
	I1201 10:14:22.860874    7357 main.go:141] libmachine: STDOUT: 
	I1201 10:14:22.860904    7357 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:14:22.860916    7357 client.go:171] LocalClient.Create took 224.023292ms
	I1201 10:14:24.863082    7357 start.go:128] duration metric: createHost completed in 2.293739166s
	I1201 10:14:24.863151    7357 start.go:83] releasing machines lock for "kubernetes-upgrade-512000", held for 2.294127333s
	W1201 10:14:24.863494    7357 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-512000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-512000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:14:24.892152    7357 out.go:177] 
	W1201 10:14:24.896299    7357 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1201 10:14:24.896321    7357 out.go:239] * 
	* 
	W1201 10:14:24.898391    7357 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 10:14:24.910216    7357 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:237: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-512000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-512000
version_upgrade_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-512000 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-512000 status --format={{.Host}}: exit status 7 (39.267042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-512000 --memory=2200 --kubernetes-version=v1.29.0-rc.1 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-512000 --memory=2200 --kubernetes-version=v1.29.0-rc.1 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.224353791s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-512000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17703
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node kubernetes-upgrade-512000 in cluster kubernetes-upgrade-512000
	* Restarting existing qemu2 VM for "kubernetes-upgrade-512000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-512000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 10:14:25.102265    7377 out.go:296] Setting OutFile to fd 1 ...
	I1201 10:14:25.102382    7377 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:14:25.102386    7377 out.go:309] Setting ErrFile to fd 2...
	I1201 10:14:25.102389    7377 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:14:25.102521    7377 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17703-5375/.minikube/bin
	I1201 10:14:25.103583    7377 out.go:303] Setting JSON to false
	I1201 10:14:25.119794    7377 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2639,"bootTime":1701451826,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1201 10:14:25.119862    7377 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1201 10:14:25.125803    7377 out.go:177] * [kubernetes-upgrade-512000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1201 10:14:25.136770    7377 out.go:177]   - MINIKUBE_LOCATION=17703
	I1201 10:14:25.132972    7377 notify.go:220] Checking for updates...
	I1201 10:14:25.143795    7377 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	I1201 10:14:25.151829    7377 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1201 10:14:25.159808    7377 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 10:14:25.167858    7377 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	I1201 10:14:25.175833    7377 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 10:14:25.180043    7377 config.go:182] Loaded profile config "kubernetes-upgrade-512000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I1201 10:14:25.180330    7377 driver.go:392] Setting default libvirt URI to qemu:///system
	I1201 10:14:25.183829    7377 out.go:177] * Using the qemu2 driver based on existing profile
	I1201 10:14:25.190823    7377 start.go:298] selected driver: qemu2
	I1201 10:14:25.190829    7377 start.go:902] validating driver "qemu2" against &{Name:kubernetes-upgrade-512000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubern
etesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-512000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:do
cker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1201 10:14:25.190918    7377 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 10:14:25.193605    7377 cni.go:84] Creating CNI manager for ""
	I1201 10:14:25.193623    7377 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1201 10:14:25.193630    7377 start_flags.go:323] config:
	{Name:kubernetes-upgrade-512000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.1 ClusterName:kubernetes-upgrade-51200
0 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQ
emuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1201 10:14:25.198339    7377 iso.go:125] acquiring lock: {Name:mk3aa781e939aa763c73ad5fb916ad9c0c6cf746 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 10:14:25.206817    7377 out.go:177] * Starting control plane node kubernetes-upgrade-512000 in cluster kubernetes-upgrade-512000
	I1201 10:14:25.209876    7377 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime docker
	I1201 10:14:25.209902    7377 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.1-docker-overlay2-arm64.tar.lz4
	I1201 10:14:25.209911    7377 cache.go:56] Caching tarball of preloaded images
	I1201 10:14:25.209978    7377 preload.go:174] Found /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1201 10:14:25.209984    7377 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.1 on docker
	I1201 10:14:25.210053    7377 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17703-5375/.minikube/profiles/kubernetes-upgrade-512000/config.json ...
	I1201 10:14:25.212882    7377 start.go:365] acquiring machines lock for kubernetes-upgrade-512000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:14:25.212924    7377 start.go:369] acquired machines lock for "kubernetes-upgrade-512000" in 32.375µs
	I1201 10:14:25.212943    7377 start.go:96] Skipping create...Using existing machine configuration
	I1201 10:14:25.212952    7377 fix.go:54] fixHost starting: 
	I1201 10:14:25.213097    7377 fix.go:102] recreateIfNeeded on kubernetes-upgrade-512000: state=Stopped err=<nil>
	W1201 10:14:25.213110    7377 fix.go:128] unexpected machine state, will restart: <nil>
	I1201 10:14:25.224787    7377 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-512000" ...
	I1201 10:14:25.228902    7377 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/kubernetes-upgrade-512000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/kubernetes-upgrade-512000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/kubernetes-upgrade-512000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:0a:a0:ac:ba:30 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/kubernetes-upgrade-512000/disk.qcow2
	I1201 10:14:25.231368    7377 main.go:141] libmachine: STDOUT: 
	I1201 10:14:25.231394    7377 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:14:25.231430    7377 fix.go:56] fixHost completed within 18.479209ms
	I1201 10:14:25.231435    7377 start.go:83] releasing machines lock for "kubernetes-upgrade-512000", held for 18.50525ms
	W1201 10:14:25.231444    7377 start.go:694] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1201 10:14:25.231481    7377 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:14:25.231487    7377 start.go:709] Will try again in 5 seconds ...
	I1201 10:14:30.233568    7377 start.go:365] acquiring machines lock for kubernetes-upgrade-512000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:14:30.234009    7377 start.go:369] acquired machines lock for "kubernetes-upgrade-512000" in 320.167µs
	I1201 10:14:30.234136    7377 start.go:96] Skipping create...Using existing machine configuration
	I1201 10:14:30.234168    7377 fix.go:54] fixHost starting: 
	I1201 10:14:30.234818    7377 fix.go:102] recreateIfNeeded on kubernetes-upgrade-512000: state=Stopped err=<nil>
	W1201 10:14:30.234845    7377 fix.go:128] unexpected machine state, will restart: <nil>
	I1201 10:14:30.240148    7377 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-512000" ...
	I1201 10:14:30.248281    7377 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/kubernetes-upgrade-512000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/kubernetes-upgrade-512000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/kubernetes-upgrade-512000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:0a:a0:ac:ba:30 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/kubernetes-upgrade-512000/disk.qcow2
	I1201 10:14:30.257473    7377 main.go:141] libmachine: STDOUT: 
	I1201 10:14:30.257539    7377 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:14:30.257674    7377 fix.go:56] fixHost completed within 23.445125ms
	I1201 10:14:30.257693    7377 start.go:83] releasing machines lock for "kubernetes-upgrade-512000", held for 23.659166ms
	W1201 10:14:30.257863    7377 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-512000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-512000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:14:30.266088    7377 out.go:177] 
	W1201 10:14:30.270179    7377 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1201 10:14:30.270219    7377 out.go:239] * 
	* 
	W1201 10:14:30.272863    7377 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 10:14:30.283087    7377 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:258: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-512000 --memory=2200 --kubernetes-version=v1.29.0-rc.1 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-512000 version --output=json
version_upgrade_test.go:261: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-512000 version --output=json: exit status 1 (61.617709ms)

                                                
                                                
** stderr ** 
	error: context "kubernetes-upgrade-512000" does not exist

                                                
                                                
** /stderr **
version_upgrade_test.go:263: error running kubectl: exit status 1
panic.go:523: *** TestKubernetesUpgrade FAILED at 2023-12-01 10:14:30.36034 -0800 PST m=+684.405942210
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-512000 -n kubernetes-upgrade-512000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-512000 -n kubernetes-upgrade-512000: exit status 7 (36.013667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-512000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-512000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-512000
--- FAIL: TestKubernetesUpgrade (15.38s)
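Note: both kubernetes-upgrade-512000 starts above, and the Pause and NoKubernetes failures further down, exit for the same reason: socket_vmnet_client cannot reach /var/run/socket_vmnet ("Connection refused"), so the qemu2 driver never gets a network for the VM. The sketch below is not part of the test suite; it only performs the Unix-socket dial that is failing here, against the SocketVMnetPath value shown in the profile config.

    // socketcheck.go - minimal sketch, not part of the minikube test suite.
    // It performs only the Unix-socket dial that fails in the runs above,
    // using the SocketVMnetPath from the profile config (/var/run/socket_vmnet).
    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        const sock = "/var/run/socket_vmnet"
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            // On this agent this reports the same "connection refused" seen in the
            // logs, i.e. nothing is listening on the socket_vmnet socket.
            fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections at", sock)
    }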

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.61s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.32.0 on darwin (arm64)
- MINIKUBE_LOCATION=17703
- KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current165433205/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.61s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.42s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.32.0 on darwin (arm64)
- MINIKUBE_LOCATION=17703
- KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1646327887/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.42s)
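Both TestHyperkitDriverSkipUpgrade subtests fail with DRV_UNSUPPORTED_OS rather than a provisioning error: the hyperkit driver only exists for darwin/amd64, and this agent is darwin/arm64. A hypothetical guard (not the actual driver_install_or_update_test.go code) that would skip rather than fail on such hosts could look like this:

    package main_test

    import (
        "runtime"
        "testing"
    )

    // Hypothetical arch guard, not taken from driver_install_or_update_test.go:
    // hyperkit only ships for darwin/amd64, so an arm64 agent could skip the
    // upgrade subtests up front instead of failing with DRV_UNSUPPORTED_OS.
    func TestHyperkitDriverSkipUpgradeGuard(t *testing.T) {
        if runtime.GOOS != "darwin" || runtime.GOARCH != "amd64" {
            t.Skipf("hyperkit driver requires darwin/amd64, running on %s/%s",
                runtime.GOOS, runtime.GOARCH)
        }
        // ... the existing upgrade checks would run here on supported hosts.
    }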

                                                
                                    
TestStoppedBinaryUpgrade/Setup (144.51s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
version_upgrade_test.go:168: v1.6.2 release installation failed: bad response code: 404
--- FAIL: TestStoppedBinaryUpgrade/Setup (144.51s)
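Unlike the other failures in this run, TestStoppedBinaryUpgrade/Setup never reaches the qemu2 driver: downloading the v1.6.2 release binary returns HTTP 404, plausibly because v1.6.2 predates darwin/arm64 builds of minikube, so no matching asset exists. The sketch below reproduces that availability check; the release URL pattern is an assumption and is not taken from version_upgrade_test.go.

    // releasecheck.go - hedged sketch of the availability check behind the 404 above.
    package main

    import (
        "fmt"
        "net/http"
        "runtime"
    )

    func main() {
        // Assumed URL pattern for old release binaries (not read from the test code):
        // https://storage.googleapis.com/minikube/releases/<version>/minikube-<os>-<arch>
        url := fmt.Sprintf("https://storage.googleapis.com/minikube/releases/v1.6.2/minikube-%s-%s",
            runtime.GOOS, runtime.GOARCH)
        resp, err := http.Head(url)
        if err != nil {
            fmt.Println("request failed:", err)
            return
        }
        defer resp.Body.Close()
        // A 404 for darwin/arm64 would match the "bad response code: 404" above.
        fmt.Println(url, "->", resp.Status)
    }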

                                                
                                    
TestPause/serial/Start (9.9s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-180000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-180000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.841088167s)

                                                
                                                
-- stdout --
	* [pause-180000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17703
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node pause-180000 in cluster pause-180000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-180000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-180000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-180000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-180000 -n pause-180000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-180000 -n pause-180000: exit status 7 (56.44475ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-180000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.90s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (10.01s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-945000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-945000 --driver=qemu2 : exit status 80 (9.937254833s)

                                                
                                                
-- stdout --
	* [NoKubernetes-945000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17703
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node NoKubernetes-945000 in cluster NoKubernetes-945000
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-945000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-945000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-945000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-945000 -n NoKubernetes-945000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-945000 -n NoKubernetes-945000: exit status 7 (71.165833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-945000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (10.01s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (5.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-945000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-945000 --no-kubernetes --driver=qemu2 : exit status 80 (5.28169525s)

                                                
                                                
-- stdout --
	* [NoKubernetes-945000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17703
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-945000
	* Restarting existing qemu2 VM for "NoKubernetes-945000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-945000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-945000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-945000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-945000 -n NoKubernetes-945000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-945000 -n NoKubernetes-945000: exit status 7 (70.844666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-945000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.35s)

                                                
                                    
TestNoKubernetes/serial/Start (5.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-945000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-945000 --no-kubernetes --driver=qemu2 : exit status 80 (5.271040041s)

                                                
                                                
-- stdout --
	* [NoKubernetes-945000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17703
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-945000
	* Restarting existing qemu2 VM for "NoKubernetes-945000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-945000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-945000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-945000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-945000 -n NoKubernetes-945000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-945000 -n NoKubernetes-945000: exit status 7 (69.493541ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-945000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.34s)
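
The post-mortem helper runs out/minikube-darwin-arm64 status --format={{.Host}} and treats exit status 7 as "host stopped, may be ok". A rough, self-contained sketch of that run-and-inspect-exit-code pattern (illustrative only, not the actual helpers_test.go code):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// Sketch of the post-mortem pattern: run the status command, print its output,
// and report the exit code (7 means the host is stopped in the logs above).
func main() {
	cmd := exec.Command("out/minikube-darwin-arm64",
		"status", "--format={{.Host}}", "-p", "NoKubernetes-945000")
	out, err := cmd.CombinedOutput()
	fmt.Printf("-- stdout --\n%s-- /stdout --\n", out)

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("exit status 0")
	case errors.As(err, &exitErr):
		fmt.Printf("exit status %d (may be ok)\n", exitErr.ExitCode())
	default:
		fmt.Println("failed to run:", err)
	}
}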

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (5.36s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-945000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-945000 --driver=qemu2 : exit status 80 (5.28561775s)

                                                
                                                
-- stdout --
	* [NoKubernetes-945000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17703
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-945000
	* Restarting existing qemu2 VM for "NoKubernetes-945000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-945000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-945000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-945000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-945000 -n NoKubernetes-945000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-945000 -n NoKubernetes-945000: exit status 7 (68.578333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-945000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.36s)
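
Each failed start in this report follows the same shape: the host create/restart fails with "Connection refused", minikube deletes or restarts the machine, waits five seconds ("Will try again in 5 seconds ..."), retries once, then exits with GUEST_PROVISION. A generic fixed-delay retry sketch that mirrors that flow (illustrative only, not minikube's implementation):

package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost is a stand-in for the real driver start; here it always fails
// the way the qemu2 driver does while socket_vmnet is down.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	const attempts = 2
	const delay = 5 * time.Second

	var err error
	for i := 1; i <= attempts; i++ {
		if err = startHost(); err == nil {
			fmt.Println("host started")
			return
		}
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		if i < attempts {
			time.Sleep(delay)
		}
	}
	fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
}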

                                                
                                    
TestNetworkPlugins/group/auto/Start (9.98s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-384000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-384000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.974423s)

                                                
                                                
-- stdout --
	* [auto-384000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17703
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node auto-384000 in cluster auto-384000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-384000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 10:15:26.575490    7505 out.go:296] Setting OutFile to fd 1 ...
	I1201 10:15:26.575638    7505 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:15:26.575641    7505 out.go:309] Setting ErrFile to fd 2...
	I1201 10:15:26.575644    7505 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:15:26.575763    7505 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17703-5375/.minikube/bin
	I1201 10:15:26.576899    7505 out.go:303] Setting JSON to false
	I1201 10:15:26.592957    7505 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2700,"bootTime":1701451826,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1201 10:15:26.593054    7505 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1201 10:15:26.597369    7505 out.go:177] * [auto-384000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1201 10:15:26.610284    7505 out.go:177]   - MINIKUBE_LOCATION=17703
	I1201 10:15:26.606325    7505 notify.go:220] Checking for updates...
	I1201 10:15:26.618287    7505 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	I1201 10:15:26.626298    7505 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1201 10:15:26.634231    7505 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 10:15:26.642279    7505 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	I1201 10:15:26.650225    7505 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 10:15:26.654741    7505 config.go:182] Loaded profile config "multinode-486000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1201 10:15:26.654782    7505 driver.go:392] Setting default libvirt URI to qemu:///system
	I1201 10:15:26.659239    7505 out.go:177] * Using the qemu2 driver based on user configuration
	I1201 10:15:26.666201    7505 start.go:298] selected driver: qemu2
	I1201 10:15:26.666207    7505 start.go:902] validating driver "qemu2" against <nil>
	I1201 10:15:26.666213    7505 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 10:15:26.668809    7505 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1201 10:15:26.673243    7505 out.go:177] * Automatically selected the socket_vmnet network
	I1201 10:15:26.676343    7505 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 10:15:26.676379    7505 cni.go:84] Creating CNI manager for ""
	I1201 10:15:26.676386    7505 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1201 10:15:26.676390    7505 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1201 10:15:26.676399    7505 start_flags.go:323] config:
	{Name:auto-384000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-384000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISoc
ket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0
AutoPauseInterval:1m0s GPUs:}
	I1201 10:15:26.681731    7505 iso.go:125] acquiring lock: {Name:mk3aa781e939aa763c73ad5fb916ad9c0c6cf746 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 10:15:26.689254    7505 out.go:177] * Starting control plane node auto-384000 in cluster auto-384000
	I1201 10:15:26.690568    7505 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1201 10:15:26.690596    7505 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I1201 10:15:26.690605    7505 cache.go:56] Caching tarball of preloaded images
	I1201 10:15:26.690687    7505 preload.go:174] Found /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1201 10:15:26.690693    7505 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1201 10:15:26.690773    7505 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17703-5375/.minikube/profiles/auto-384000/config.json ...
	I1201 10:15:26.690784    7505 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17703-5375/.minikube/profiles/auto-384000/config.json: {Name:mk3d8dfdb5b897151b68a47136cf2e1ddfeb3e3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 10:15:26.690988    7505 start.go:365] acquiring machines lock for auto-384000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:15:26.691030    7505 start.go:369] acquired machines lock for "auto-384000" in 35.291µs
	I1201 10:15:26.691044    7505 start.go:93] Provisioning new machine with config: &{Name:auto-384000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.28.4 ClusterName:auto-384000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1201 10:15:26.691082    7505 start.go:125] createHost starting for "" (driver="qemu2")
	I1201 10:15:26.694294    7505 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1201 10:15:26.712007    7505 start.go:159] libmachine.API.Create for "auto-384000" (driver="qemu2")
	I1201 10:15:26.712033    7505 client.go:168] LocalClient.Create starting
	I1201 10:15:26.712095    7505 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/ca.pem
	I1201 10:15:26.712127    7505 main.go:141] libmachine: Decoding PEM data...
	I1201 10:15:26.712137    7505 main.go:141] libmachine: Parsing certificate...
	I1201 10:15:26.712181    7505 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/cert.pem
	I1201 10:15:26.712204    7505 main.go:141] libmachine: Decoding PEM data...
	I1201 10:15:26.712216    7505 main.go:141] libmachine: Parsing certificate...
	I1201 10:15:26.712577    7505 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17703-5375/.minikube/cache/iso/arm64/minikube-v1.32.1-1701387192-17703-arm64.iso...
	I1201 10:15:26.843291    7505 main.go:141] libmachine: Creating SSH key...
	I1201 10:15:26.965919    7505 main.go:141] libmachine: Creating Disk image...
	I1201 10:15:26.965924    7505 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1201 10:15:26.966099    7505 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/auto-384000/disk.qcow2.raw /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/auto-384000/disk.qcow2
	I1201 10:15:26.978287    7505 main.go:141] libmachine: STDOUT: 
	I1201 10:15:26.978311    7505 main.go:141] libmachine: STDERR: 
	I1201 10:15:26.978376    7505 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/auto-384000/disk.qcow2 +20000M
	I1201 10:15:26.988822    7505 main.go:141] libmachine: STDOUT: Image resized.
	
	I1201 10:15:26.988844    7505 main.go:141] libmachine: STDERR: 
	I1201 10:15:26.988866    7505 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/auto-384000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/auto-384000/disk.qcow2
	I1201 10:15:26.988871    7505 main.go:141] libmachine: Starting QEMU VM...
	I1201 10:15:26.988912    7505 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/auto-384000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/auto-384000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/auto-384000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:b4:ed:78:88:03 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/auto-384000/disk.qcow2
	I1201 10:15:26.990626    7505 main.go:141] libmachine: STDOUT: 
	I1201 10:15:26.990645    7505 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:15:26.990663    7505 client.go:171] LocalClient.Create took 278.63125ms
	I1201 10:15:28.992791    7505 start.go:128] duration metric: createHost completed in 2.301743291s
	I1201 10:15:28.992848    7505 start.go:83] releasing machines lock for "auto-384000", held for 2.301859292s
	W1201 10:15:28.992962    7505 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:15:29.011233    7505 out.go:177] * Deleting "auto-384000" in qemu2 ...
	W1201 10:15:29.031674    7505 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:15:29.031710    7505 start.go:709] Will try again in 5 seconds ...
	I1201 10:15:34.033849    7505 start.go:365] acquiring machines lock for auto-384000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:15:34.034375    7505 start.go:369] acquired machines lock for "auto-384000" in 362.542µs
	I1201 10:15:34.034479    7505 start.go:93] Provisioning new machine with config: &{Name:auto-384000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.28.4 ClusterName:auto-384000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1201 10:15:34.034719    7505 start.go:125] createHost starting for "" (driver="qemu2")
	I1201 10:15:34.054466    7505 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1201 10:15:34.104049    7505 start.go:159] libmachine.API.Create for "auto-384000" (driver="qemu2")
	I1201 10:15:34.104094    7505 client.go:168] LocalClient.Create starting
	I1201 10:15:34.104212    7505 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/ca.pem
	I1201 10:15:34.104269    7505 main.go:141] libmachine: Decoding PEM data...
	I1201 10:15:34.104297    7505 main.go:141] libmachine: Parsing certificate...
	I1201 10:15:34.104357    7505 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/cert.pem
	I1201 10:15:34.104398    7505 main.go:141] libmachine: Decoding PEM data...
	I1201 10:15:34.104412    7505 main.go:141] libmachine: Parsing certificate...
	I1201 10:15:34.104899    7505 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17703-5375/.minikube/cache/iso/arm64/minikube-v1.32.1-1701387192-17703-arm64.iso...
	I1201 10:15:34.248440    7505 main.go:141] libmachine: Creating SSH key...
	I1201 10:15:34.433463    7505 main.go:141] libmachine: Creating Disk image...
	I1201 10:15:34.433473    7505 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1201 10:15:34.433663    7505 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/auto-384000/disk.qcow2.raw /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/auto-384000/disk.qcow2
	I1201 10:15:34.445989    7505 main.go:141] libmachine: STDOUT: 
	I1201 10:15:34.446015    7505 main.go:141] libmachine: STDERR: 
	I1201 10:15:34.446083    7505 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/auto-384000/disk.qcow2 +20000M
	I1201 10:15:34.456563    7505 main.go:141] libmachine: STDOUT: Image resized.
	
	I1201 10:15:34.456582    7505 main.go:141] libmachine: STDERR: 
	I1201 10:15:34.456598    7505 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/auto-384000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/auto-384000/disk.qcow2
	I1201 10:15:34.456608    7505 main.go:141] libmachine: Starting QEMU VM...
	I1201 10:15:34.456649    7505 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/auto-384000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/auto-384000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/auto-384000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:26:59:e5:d3:ff -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/auto-384000/disk.qcow2
	I1201 10:15:34.458340    7505 main.go:141] libmachine: STDOUT: 
	I1201 10:15:34.458363    7505 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:15:34.458378    7505 client.go:171] LocalClient.Create took 354.284458ms
	I1201 10:15:36.460508    7505 start.go:128] duration metric: createHost completed in 2.42581975s
	I1201 10:15:36.460561    7505 start.go:83] releasing machines lock for "auto-384000", held for 2.426215625s
	W1201 10:15:36.461010    7505 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-384000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-384000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:15:36.484812    7505 out.go:177] 
	W1201 10:15:36.489941    7505 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1201 10:15:36.489976    7505 out.go:239] * 
	* 
	W1201 10:15:36.492584    7505 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 10:15:36.504759    7505 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.98s)
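
The --alsologtostderr output above shows how libmachine launches the VM: it shells out to qemu-img to create and resize the disk, then wraps qemu-system-aarch64 in /opt/socket_vmnet/bin/socket_vmnet_client, passing the socket path as the first argument; the chain fails as soon as the client cannot connect. A simplified sketch of building that wrapped command (argument values copied from the log and heavily trimmed, not a reconstruction of the driver):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Sketch of the command chain logged by libmachine: socket_vmnet_client takes
// the socket path first, followed by the full qemu-system-aarch64 invocation.
func main() {
	const client = "/opt/socket_vmnet/bin/socket_vmnet_client"
	const socketPath = "/var/run/socket_vmnet"

	qemuArgs := []string{
		"qemu-system-aarch64",
		"-M", "virt,highmem=off",
		"-cpu", "host",
		"-accel", "hvf",
		"-m", "3072", "-smp", "2",
		"-device", "virtio-net-pci,netdev=net0",
		"-netdev", "socket,id=net0,fd=3", // fd 3 is the vmnet socket handed over by the client
	}

	cmd := exec.Command(client, append([]string{socketPath}, qemuArgs...)...)
	fmt.Println("executing:", strings.Join(cmd.Args, " "))
	// cmd.Run() would fail with "Connection refused" for as long as the
	// socket_vmnet daemon is not listening on socketPath.
}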

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (9.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-384000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-384000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.848171166s)

                                                
                                                
-- stdout --
	* [kindnet-384000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17703
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kindnet-384000 in cluster kindnet-384000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-384000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 10:15:38.887148    7617 out.go:296] Setting OutFile to fd 1 ...
	I1201 10:15:38.887283    7617 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:15:38.887286    7617 out.go:309] Setting ErrFile to fd 2...
	I1201 10:15:38.887289    7617 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:15:38.887413    7617 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17703-5375/.minikube/bin
	I1201 10:15:38.888480    7617 out.go:303] Setting JSON to false
	I1201 10:15:38.904395    7617 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2712,"bootTime":1701451826,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1201 10:15:38.904463    7617 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1201 10:15:38.911533    7617 out.go:177] * [kindnet-384000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1201 10:15:38.925509    7617 out.go:177]   - MINIKUBE_LOCATION=17703
	I1201 10:15:38.921541    7617 notify.go:220] Checking for updates...
	I1201 10:15:38.933465    7617 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	I1201 10:15:38.936528    7617 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1201 10:15:38.942468    7617 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 10:15:38.950364    7617 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	I1201 10:15:38.954465    7617 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 10:15:38.957988    7617 config.go:182] Loaded profile config "multinode-486000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1201 10:15:38.958030    7617 driver.go:392] Setting default libvirt URI to qemu:///system
	I1201 10:15:38.962472    7617 out.go:177] * Using the qemu2 driver based on user configuration
	I1201 10:15:38.969490    7617 start.go:298] selected driver: qemu2
	I1201 10:15:38.969503    7617 start.go:902] validating driver "qemu2" against <nil>
	I1201 10:15:38.969510    7617 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 10:15:38.972094    7617 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1201 10:15:38.976397    7617 out.go:177] * Automatically selected the socket_vmnet network
	I1201 10:15:38.980598    7617 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 10:15:38.980650    7617 cni.go:84] Creating CNI manager for "kindnet"
	I1201 10:15:38.980655    7617 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1201 10:15:38.980660    7617 start_flags.go:323] config:
	{Name:kindnet-384000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kindnet-384000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: S
SHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1201 10:15:38.985655    7617 iso.go:125] acquiring lock: {Name:mk3aa781e939aa763c73ad5fb916ad9c0c6cf746 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 10:15:38.993457    7617 out.go:177] * Starting control plane node kindnet-384000 in cluster kindnet-384000
	I1201 10:15:38.997539    7617 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1201 10:15:38.997573    7617 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I1201 10:15:38.997583    7617 cache.go:56] Caching tarball of preloaded images
	I1201 10:15:38.997654    7617 preload.go:174] Found /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1201 10:15:38.997661    7617 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1201 10:15:38.997746    7617 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17703-5375/.minikube/profiles/kindnet-384000/config.json ...
	I1201 10:15:38.997759    7617 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17703-5375/.minikube/profiles/kindnet-384000/config.json: {Name:mk26b42211b6e050d016e428c2ad1b386fa56f5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 10:15:38.998057    7617 start.go:365] acquiring machines lock for kindnet-384000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:15:38.998093    7617 start.go:369] acquired machines lock for "kindnet-384000" in 29.292µs
	I1201 10:15:38.998107    7617 start.go:93] Provisioning new machine with config: &{Name:kindnet-384000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.4 ClusterName:kindnet-384000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1201 10:15:38.998144    7617 start.go:125] createHost starting for "" (driver="qemu2")
	I1201 10:15:39.005457    7617 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1201 10:15:39.024602    7617 start.go:159] libmachine.API.Create for "kindnet-384000" (driver="qemu2")
	I1201 10:15:39.024632    7617 client.go:168] LocalClient.Create starting
	I1201 10:15:39.024716    7617 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/ca.pem
	I1201 10:15:39.024755    7617 main.go:141] libmachine: Decoding PEM data...
	I1201 10:15:39.024765    7617 main.go:141] libmachine: Parsing certificate...
	I1201 10:15:39.024805    7617 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/cert.pem
	I1201 10:15:39.024832    7617 main.go:141] libmachine: Decoding PEM data...
	I1201 10:15:39.024841    7617 main.go:141] libmachine: Parsing certificate...
	I1201 10:15:39.025213    7617 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17703-5375/.minikube/cache/iso/arm64/minikube-v1.32.1-1701387192-17703-arm64.iso...
	I1201 10:15:39.157954    7617 main.go:141] libmachine: Creating SSH key...
	I1201 10:15:39.212734    7617 main.go:141] libmachine: Creating Disk image...
	I1201 10:15:39.212739    7617 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1201 10:15:39.212893    7617 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/kindnet-384000/disk.qcow2.raw /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/kindnet-384000/disk.qcow2
	I1201 10:15:39.224791    7617 main.go:141] libmachine: STDOUT: 
	I1201 10:15:39.224821    7617 main.go:141] libmachine: STDERR: 
	I1201 10:15:39.224881    7617 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/kindnet-384000/disk.qcow2 +20000M
	I1201 10:15:39.235470    7617 main.go:141] libmachine: STDOUT: Image resized.
	
	I1201 10:15:39.235495    7617 main.go:141] libmachine: STDERR: 
	I1201 10:15:39.235512    7617 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/kindnet-384000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/kindnet-384000/disk.qcow2
	I1201 10:15:39.235518    7617 main.go:141] libmachine: Starting QEMU VM...
	I1201 10:15:39.235549    7617 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/kindnet-384000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/kindnet-384000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/kindnet-384000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:a0:53:ad:aa:4a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/kindnet-384000/disk.qcow2
	I1201 10:15:39.237219    7617 main.go:141] libmachine: STDOUT: 
	I1201 10:15:39.237243    7617 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:15:39.237262    7617 client.go:171] LocalClient.Create took 212.628917ms
	I1201 10:15:41.239398    7617 start.go:128] duration metric: createHost completed in 2.241289s
	I1201 10:15:41.239465    7617 start.go:83] releasing machines lock for "kindnet-384000", held for 2.241416s
	W1201 10:15:41.239516    7617 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:15:41.253684    7617 out.go:177] * Deleting "kindnet-384000" in qemu2 ...
	W1201 10:15:41.284278    7617 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:15:41.284312    7617 start.go:709] Will try again in 5 seconds ...
	I1201 10:15:46.286461    7617 start.go:365] acquiring machines lock for kindnet-384000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:15:46.287016    7617 start.go:369] acquired machines lock for "kindnet-384000" in 399.542µs
	I1201 10:15:46.287136    7617 start.go:93] Provisioning new machine with config: &{Name:kindnet-384000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.4 ClusterName:kindnet-384000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1201 10:15:46.287396    7617 start.go:125] createHost starting for "" (driver="qemu2")
	I1201 10:15:46.297166    7617 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1201 10:15:46.346193    7617 start.go:159] libmachine.API.Create for "kindnet-384000" (driver="qemu2")
	I1201 10:15:46.346231    7617 client.go:168] LocalClient.Create starting
	I1201 10:15:46.346370    7617 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/ca.pem
	I1201 10:15:46.346433    7617 main.go:141] libmachine: Decoding PEM data...
	I1201 10:15:46.346459    7617 main.go:141] libmachine: Parsing certificate...
	I1201 10:15:46.346516    7617 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/cert.pem
	I1201 10:15:46.346558    7617 main.go:141] libmachine: Decoding PEM data...
	I1201 10:15:46.346578    7617 main.go:141] libmachine: Parsing certificate...
	I1201 10:15:46.347076    7617 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17703-5375/.minikube/cache/iso/arm64/minikube-v1.32.1-1701387192-17703-arm64.iso...
	I1201 10:15:46.489772    7617 main.go:141] libmachine: Creating SSH key...
	I1201 10:15:46.620000    7617 main.go:141] libmachine: Creating Disk image...
	I1201 10:15:46.620009    7617 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1201 10:15:46.620171    7617 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/kindnet-384000/disk.qcow2.raw /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/kindnet-384000/disk.qcow2
	I1201 10:15:46.632125    7617 main.go:141] libmachine: STDOUT: 
	I1201 10:15:46.632147    7617 main.go:141] libmachine: STDERR: 
	I1201 10:15:46.632212    7617 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/kindnet-384000/disk.qcow2 +20000M
	I1201 10:15:46.642781    7617 main.go:141] libmachine: STDOUT: Image resized.
	
	I1201 10:15:46.642802    7617 main.go:141] libmachine: STDERR: 
	I1201 10:15:46.642825    7617 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/kindnet-384000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/kindnet-384000/disk.qcow2
	I1201 10:15:46.642831    7617 main.go:141] libmachine: Starting QEMU VM...
	I1201 10:15:46.642872    7617 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/kindnet-384000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/kindnet-384000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/kindnet-384000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:42:c9:09:a6:d9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/kindnet-384000/disk.qcow2
	I1201 10:15:46.644633    7617 main.go:141] libmachine: STDOUT: 
	I1201 10:15:46.644649    7617 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:15:46.644664    7617 client.go:171] LocalClient.Create took 298.435625ms
	I1201 10:15:48.646786    7617 start.go:128] duration metric: createHost completed in 2.359422167s
	I1201 10:15:48.646851    7617 start.go:83] releasing machines lock for "kindnet-384000", held for 2.359866792s
	W1201 10:15:48.647257    7617 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-384000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-384000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:15:48.670932    7617 out.go:177] 
	W1201 10:15:48.676016    7617 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1201 10:15:48.676049    7617 out.go:239] * 
	* 
	W1201 10:15:48.677864    7617 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 10:15:48.691937    7617 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.85s)
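
The --format={{.Host}} flag used by the post-mortems is a Go text/template evaluated against minikube's status fields, which is why a failed run prints just "Stopped". A tiny illustration of that mechanism (the Status struct here is a hypothetical stand-in, not minikube's actual type):

package main

import (
	"os"
	"text/template"
)

// Hypothetical stand-in for the status fields rendered by --format.
type Status struct {
	Host    string
	Kubelet string
}

func main() {
	// Same template string as the post-mortem: --format={{.Host}}
	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))

	// A stopped host, as reported throughout this run.
	s := Status{Host: "Stopped", Kubelet: "Stopped"}
	if err := tmpl.Execute(os.Stdout, s); err != nil {
		panic(err)
	}
}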

                                                
                                    
TestNetworkPlugins/group/calico/Start (9.89s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-384000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-384000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.890345667s)

                                                
                                                
-- stdout --
	* [calico-384000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17703
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node calico-384000 in cluster calico-384000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-384000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1201 10:15:51.162096    7741 out.go:296] Setting OutFile to fd 1 ...
	I1201 10:15:51.162249    7741 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:15:51.162252    7741 out.go:309] Setting ErrFile to fd 2...
	I1201 10:15:51.162255    7741 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:15:51.162369    7741 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17703-5375/.minikube/bin
	I1201 10:15:51.163462    7741 out.go:303] Setting JSON to false
	I1201 10:15:51.179469    7741 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2725,"bootTime":1701451826,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1201 10:15:51.179553    7741 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1201 10:15:51.187016    7741 out.go:177] * [calico-384000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1201 10:15:51.191913    7741 out.go:177]   - MINIKUBE_LOCATION=17703
	I1201 10:15:51.188854    7741 notify.go:220] Checking for updates...
	I1201 10:15:51.200908    7741 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	I1201 10:15:51.206950    7741 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1201 10:15:51.210906    7741 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 10:15:51.218911    7741 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	I1201 10:15:51.226934    7741 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 10:15:51.230263    7741 config.go:182] Loaded profile config "multinode-486000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1201 10:15:51.230309    7741 driver.go:392] Setting default libvirt URI to qemu:///system
	I1201 10:15:51.234895    7741 out.go:177] * Using the qemu2 driver based on user configuration
	I1201 10:15:51.240914    7741 start.go:298] selected driver: qemu2
	I1201 10:15:51.240921    7741 start.go:902] validating driver "qemu2" against <nil>
	I1201 10:15:51.240926    7741 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 10:15:51.243371    7741 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1201 10:15:51.247890    7741 out.go:177] * Automatically selected the socket_vmnet network
	I1201 10:15:51.250952    7741 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 10:15:51.250988    7741 cni.go:84] Creating CNI manager for "calico"
	I1201 10:15:51.250992    7741 start_flags.go:318] Found "Calico" CNI - setting NetworkPlugin=cni
	I1201 10:15:51.251000    7741 start_flags.go:323] config:
	{Name:calico-384000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-384000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHA
gentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1201 10:15:51.255831    7741 iso.go:125] acquiring lock: {Name:mk3aa781e939aa763c73ad5fb916ad9c0c6cf746 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 10:15:51.263033    7741 out.go:177] * Starting control plane node calico-384000 in cluster calico-384000
	I1201 10:15:51.266901    7741 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1201 10:15:51.266931    7741 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I1201 10:15:51.266940    7741 cache.go:56] Caching tarball of preloaded images
	I1201 10:15:51.267015    7741 preload.go:174] Found /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1201 10:15:51.267021    7741 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1201 10:15:51.267107    7741 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17703-5375/.minikube/profiles/calico-384000/config.json ...
	I1201 10:15:51.267119    7741 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17703-5375/.minikube/profiles/calico-384000/config.json: {Name:mkcb97eb7ee04b6d0edfc2e21c4e740902b1cd67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 10:15:51.267431    7741 start.go:365] acquiring machines lock for calico-384000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:15:51.267463    7741 start.go:369] acquired machines lock for "calico-384000" in 26.542µs
	I1201 10:15:51.267476    7741 start.go:93] Provisioning new machine with config: &{Name:calico-384000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.4 ClusterName:calico-384000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1201 10:15:51.267523    7741 start.go:125] createHost starting for "" (driver="qemu2")
	I1201 10:15:51.270916    7741 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1201 10:15:51.288401    7741 start.go:159] libmachine.API.Create for "calico-384000" (driver="qemu2")
	I1201 10:15:51.288424    7741 client.go:168] LocalClient.Create starting
	I1201 10:15:51.288491    7741 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/ca.pem
	I1201 10:15:51.288520    7741 main.go:141] libmachine: Decoding PEM data...
	I1201 10:15:51.288534    7741 main.go:141] libmachine: Parsing certificate...
	I1201 10:15:51.288571    7741 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/cert.pem
	I1201 10:15:51.288593    7741 main.go:141] libmachine: Decoding PEM data...
	I1201 10:15:51.288600    7741 main.go:141] libmachine: Parsing certificate...
	I1201 10:15:51.288936    7741 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17703-5375/.minikube/cache/iso/arm64/minikube-v1.32.1-1701387192-17703-arm64.iso...
	I1201 10:15:51.420435    7741 main.go:141] libmachine: Creating SSH key...
	I1201 10:15:51.492429    7741 main.go:141] libmachine: Creating Disk image...
	I1201 10:15:51.492435    7741 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1201 10:15:51.492592    7741 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/calico-384000/disk.qcow2.raw /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/calico-384000/disk.qcow2
	I1201 10:15:51.504523    7741 main.go:141] libmachine: STDOUT: 
	I1201 10:15:51.504548    7741 main.go:141] libmachine: STDERR: 
	I1201 10:15:51.504610    7741 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/calico-384000/disk.qcow2 +20000M
	I1201 10:15:51.515112    7741 main.go:141] libmachine: STDOUT: Image resized.
	
	I1201 10:15:51.515126    7741 main.go:141] libmachine: STDERR: 
	I1201 10:15:51.515146    7741 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/calico-384000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/calico-384000/disk.qcow2
	I1201 10:15:51.515151    7741 main.go:141] libmachine: Starting QEMU VM...
	I1201 10:15:51.515184    7741 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/calico-384000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/calico-384000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/calico-384000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:34:16:e8:c6:f3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/calico-384000/disk.qcow2
	I1201 10:15:51.516821    7741 main.go:141] libmachine: STDOUT: 
	I1201 10:15:51.516838    7741 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:15:51.516858    7741 client.go:171] LocalClient.Create took 228.431292ms
	I1201 10:15:53.519040    7741 start.go:128] duration metric: createHost completed in 2.251541042s
	I1201 10:15:53.519121    7741 start.go:83] releasing machines lock for "calico-384000", held for 2.251701708s
	W1201 10:15:53.519171    7741 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:15:53.541317    7741 out.go:177] * Deleting "calico-384000" in qemu2 ...
	W1201 10:15:53.569051    7741 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:15:53.569090    7741 start.go:709] Will try again in 5 seconds ...
	I1201 10:15:58.571217    7741 start.go:365] acquiring machines lock for calico-384000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:15:58.571584    7741 start.go:369] acquired machines lock for "calico-384000" in 285.792µs
	I1201 10:15:58.571690    7741 start.go:93] Provisioning new machine with config: &{Name:calico-384000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.4 ClusterName:calico-384000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1201 10:15:58.571953    7741 start.go:125] createHost starting for "" (driver="qemu2")
	I1201 10:15:58.592675    7741 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1201 10:15:58.639872    7741 start.go:159] libmachine.API.Create for "calico-384000" (driver="qemu2")
	I1201 10:15:58.639926    7741 client.go:168] LocalClient.Create starting
	I1201 10:15:58.640053    7741 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/ca.pem
	I1201 10:15:58.640129    7741 main.go:141] libmachine: Decoding PEM data...
	I1201 10:15:58.640153    7741 main.go:141] libmachine: Parsing certificate...
	I1201 10:15:58.640241    7741 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/cert.pem
	I1201 10:15:58.640299    7741 main.go:141] libmachine: Decoding PEM data...
	I1201 10:15:58.640317    7741 main.go:141] libmachine: Parsing certificate...
	I1201 10:15:58.640846    7741 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17703-5375/.minikube/cache/iso/arm64/minikube-v1.32.1-1701387192-17703-arm64.iso...
	I1201 10:15:58.783658    7741 main.go:141] libmachine: Creating SSH key...
	I1201 10:15:58.953309    7741 main.go:141] libmachine: Creating Disk image...
	I1201 10:15:58.953319    7741 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1201 10:15:58.953493    7741 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/calico-384000/disk.qcow2.raw /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/calico-384000/disk.qcow2
	I1201 10:15:58.965835    7741 main.go:141] libmachine: STDOUT: 
	I1201 10:15:58.965857    7741 main.go:141] libmachine: STDERR: 
	I1201 10:15:58.965921    7741 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/calico-384000/disk.qcow2 +20000M
	I1201 10:15:58.976437    7741 main.go:141] libmachine: STDOUT: Image resized.
	
	I1201 10:15:58.976453    7741 main.go:141] libmachine: STDERR: 
	I1201 10:15:58.976471    7741 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/calico-384000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/calico-384000/disk.qcow2
	I1201 10:15:58.976479    7741 main.go:141] libmachine: Starting QEMU VM...
	I1201 10:15:58.976546    7741 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/calico-384000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/calico-384000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/calico-384000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:10:01:e6:08:8e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/calico-384000/disk.qcow2
	I1201 10:15:58.978166    7741 main.go:141] libmachine: STDOUT: 
	I1201 10:15:58.978182    7741 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:15:58.978195    7741 client.go:171] LocalClient.Create took 338.271292ms
	I1201 10:16:00.980333    7741 start.go:128] duration metric: createHost completed in 2.408389125s
	I1201 10:16:00.980406    7741 start.go:83] releasing machines lock for "calico-384000", held for 2.408857166s
	W1201 10:16:00.980840    7741 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-384000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-384000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:16:00.991684    7741 out.go:177] 
	W1201 10:16:00.997818    7741 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1201 10:16:00.997865    7741 out.go:239] * 
	* 
	W1201 10:16:01.000476    7741 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 10:16:01.008677    7741 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.89s)
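
The stderr above shows that libmachine gets as far as preparing the disk image (qemu-img convert, then qemu-img resize +20000M) before the VM start itself fails on the socket. As a rough illustration of those two logged steps only, the sketch below shells out to the same qemu-img commands; the file names are placeholders and this is not minikube's actual implementation.

// diskprep.go - illustrative sketch of the two qemu-img steps the log shows
// succeeding before the socket_vmnet failure. Not minikube code; paths are
// placeholders standing in for the profile's machines directory.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func prepareDisk(rawPath, qcowPath string) error {
	// Mirrors: qemu-img convert -f raw -O qcow2 <raw> <qcow2>
	if out, err := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2", rawPath, qcowPath).CombinedOutput(); err != nil {
		return fmt.Errorf("qemu-img convert: %v: %s", err, out)
	}
	// Mirrors: qemu-img resize <qcow2> +20000M
	if out, err := exec.Command("qemu-img", "resize", qcowPath, "+20000M").CombinedOutput(); err != nil {
		return fmt.Errorf("qemu-img resize: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := prepareDisk("disk.qcow2.raw", "disk.qcow2"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("disk image prepared")
}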

TestNetworkPlugins/group/custom-flannel/Start (9.92s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-384000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-384000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.912804833s)

-- stdout --
	* [custom-flannel-384000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17703
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node custom-flannel-384000 in cluster custom-flannel-384000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-384000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1201 10:16:03.576506    7862 out.go:296] Setting OutFile to fd 1 ...
	I1201 10:16:03.576645    7862 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:16:03.576649    7862 out.go:309] Setting ErrFile to fd 2...
	I1201 10:16:03.576651    7862 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:16:03.576761    7862 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17703-5375/.minikube/bin
	I1201 10:16:03.577835    7862 out.go:303] Setting JSON to false
	I1201 10:16:03.593672    7862 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2737,"bootTime":1701451826,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1201 10:16:03.593757    7862 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1201 10:16:03.599010    7862 out.go:177] * [custom-flannel-384000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1201 10:16:03.607046    7862 out.go:177]   - MINIKUBE_LOCATION=17703
	I1201 10:16:03.611008    7862 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	I1201 10:16:03.607090    7862 notify.go:220] Checking for updates...
	I1201 10:16:03.617948    7862 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1201 10:16:03.622005    7862 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 10:16:03.625006    7862 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	I1201 10:16:03.627942    7862 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 10:16:03.631348    7862 config.go:182] Loaded profile config "multinode-486000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1201 10:16:03.631395    7862 driver.go:392] Setting default libvirt URI to qemu:///system
	I1201 10:16:03.635011    7862 out.go:177] * Using the qemu2 driver based on user configuration
	I1201 10:16:03.641987    7862 start.go:298] selected driver: qemu2
	I1201 10:16:03.641994    7862 start.go:902] validating driver "qemu2" against <nil>
	I1201 10:16:03.642000    7862 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 10:16:03.644357    7862 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1201 10:16:03.647018    7862 out.go:177] * Automatically selected the socket_vmnet network
	I1201 10:16:03.651028    7862 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 10:16:03.651061    7862 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1201 10:16:03.651071    7862 start_flags.go:318] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1201 10:16:03.651078    7862 start_flags.go:323] config:
	{Name:custom-flannel-384000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:custom-flannel-384000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/sock
et_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1201 10:16:03.655560    7862 iso.go:125] acquiring lock: {Name:mk3aa781e939aa763c73ad5fb916ad9c0c6cf746 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 10:16:03.662964    7862 out.go:177] * Starting control plane node custom-flannel-384000 in cluster custom-flannel-384000
	I1201 10:16:03.666942    7862 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1201 10:16:03.666964    7862 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I1201 10:16:03.666972    7862 cache.go:56] Caching tarball of preloaded images
	I1201 10:16:03.667029    7862 preload.go:174] Found /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1201 10:16:03.667035    7862 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1201 10:16:03.667092    7862 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17703-5375/.minikube/profiles/custom-flannel-384000/config.json ...
	I1201 10:16:03.667103    7862 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17703-5375/.minikube/profiles/custom-flannel-384000/config.json: {Name:mk5eb3c4962f06b48d47a67f43fad96d5288143c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 10:16:03.667326    7862 start.go:365] acquiring machines lock for custom-flannel-384000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:16:03.667366    7862 start.go:369] acquired machines lock for "custom-flannel-384000" in 29.25µs
	I1201 10:16:03.667382    7862 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-384000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.28.4 ClusterName:custom-flannel-384000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1201 10:16:03.667421    7862 start.go:125] createHost starting for "" (driver="qemu2")
	I1201 10:16:03.671019    7862 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1201 10:16:03.687808    7862 start.go:159] libmachine.API.Create for "custom-flannel-384000" (driver="qemu2")
	I1201 10:16:03.687839    7862 client.go:168] LocalClient.Create starting
	I1201 10:16:03.687904    7862 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/ca.pem
	I1201 10:16:03.687934    7862 main.go:141] libmachine: Decoding PEM data...
	I1201 10:16:03.687945    7862 main.go:141] libmachine: Parsing certificate...
	I1201 10:16:03.687995    7862 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/cert.pem
	I1201 10:16:03.688017    7862 main.go:141] libmachine: Decoding PEM data...
	I1201 10:16:03.688024    7862 main.go:141] libmachine: Parsing certificate...
	I1201 10:16:03.688372    7862 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17703-5375/.minikube/cache/iso/arm64/minikube-v1.32.1-1701387192-17703-arm64.iso...
	I1201 10:16:03.819325    7862 main.go:141] libmachine: Creating SSH key...
	I1201 10:16:03.962362    7862 main.go:141] libmachine: Creating Disk image...
	I1201 10:16:03.962369    7862 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1201 10:16:03.962546    7862 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/custom-flannel-384000/disk.qcow2.raw /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/custom-flannel-384000/disk.qcow2
	I1201 10:16:03.974760    7862 main.go:141] libmachine: STDOUT: 
	I1201 10:16:03.974780    7862 main.go:141] libmachine: STDERR: 
	I1201 10:16:03.974847    7862 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/custom-flannel-384000/disk.qcow2 +20000M
	I1201 10:16:03.985449    7862 main.go:141] libmachine: STDOUT: Image resized.
	
	I1201 10:16:03.985467    7862 main.go:141] libmachine: STDERR: 
	I1201 10:16:03.985497    7862 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/custom-flannel-384000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/custom-flannel-384000/disk.qcow2
	I1201 10:16:03.985503    7862 main.go:141] libmachine: Starting QEMU VM...
	I1201 10:16:03.985538    7862 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/custom-flannel-384000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/custom-flannel-384000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/custom-flannel-384000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:35:9a:e3:81:d9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/custom-flannel-384000/disk.qcow2
	I1201 10:16:03.987179    7862 main.go:141] libmachine: STDOUT: 
	I1201 10:16:03.987198    7862 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:16:03.987218    7862 client.go:171] LocalClient.Create took 299.380208ms
	I1201 10:16:05.989355    7862 start.go:128] duration metric: createHost completed in 2.321966417s
	I1201 10:16:05.989471    7862 start.go:83] releasing machines lock for "custom-flannel-384000", held for 2.322095292s
	W1201 10:16:05.989536    7862 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:16:06.012669    7862 out.go:177] * Deleting "custom-flannel-384000" in qemu2 ...
	W1201 10:16:06.039301    7862 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:16:06.039333    7862 start.go:709] Will try again in 5 seconds ...
	I1201 10:16:11.041377    7862 start.go:365] acquiring machines lock for custom-flannel-384000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:16:11.041728    7862 start.go:369] acquired machines lock for "custom-flannel-384000" in 265.917µs
	I1201 10:16:11.041822    7862 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-384000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.28.4 ClusterName:custom-flannel-384000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1201 10:16:11.042044    7862 start.go:125] createHost starting for "" (driver="qemu2")
	I1201 10:16:11.047852    7862 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1201 10:16:11.095958    7862 start.go:159] libmachine.API.Create for "custom-flannel-384000" (driver="qemu2")
	I1201 10:16:11.096010    7862 client.go:168] LocalClient.Create starting
	I1201 10:16:11.096146    7862 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/ca.pem
	I1201 10:16:11.096222    7862 main.go:141] libmachine: Decoding PEM data...
	I1201 10:16:11.096248    7862 main.go:141] libmachine: Parsing certificate...
	I1201 10:16:11.096311    7862 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/cert.pem
	I1201 10:16:11.096373    7862 main.go:141] libmachine: Decoding PEM data...
	I1201 10:16:11.096388    7862 main.go:141] libmachine: Parsing certificate...
	I1201 10:16:11.096941    7862 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17703-5375/.minikube/cache/iso/arm64/minikube-v1.32.1-1701387192-17703-arm64.iso...
	I1201 10:16:11.245198    7862 main.go:141] libmachine: Creating SSH key...
	I1201 10:16:11.372611    7862 main.go:141] libmachine: Creating Disk image...
	I1201 10:16:11.372624    7862 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1201 10:16:11.372795    7862 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/custom-flannel-384000/disk.qcow2.raw /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/custom-flannel-384000/disk.qcow2
	I1201 10:16:11.384945    7862 main.go:141] libmachine: STDOUT: 
	I1201 10:16:11.384974    7862 main.go:141] libmachine: STDERR: 
	I1201 10:16:11.385031    7862 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/custom-flannel-384000/disk.qcow2 +20000M
	I1201 10:16:11.395791    7862 main.go:141] libmachine: STDOUT: Image resized.
	
	I1201 10:16:11.395817    7862 main.go:141] libmachine: STDERR: 
	I1201 10:16:11.395832    7862 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/custom-flannel-384000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/custom-flannel-384000/disk.qcow2
	I1201 10:16:11.395836    7862 main.go:141] libmachine: Starting QEMU VM...
	I1201 10:16:11.395884    7862 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/custom-flannel-384000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/custom-flannel-384000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/custom-flannel-384000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:13:bf:60:b1:cc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/custom-flannel-384000/disk.qcow2
	I1201 10:16:11.397585    7862 main.go:141] libmachine: STDOUT: 
	I1201 10:16:11.397604    7862 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:16:11.397618    7862 client.go:171] LocalClient.Create took 301.609834ms
	I1201 10:16:13.399738    7862 start.go:128] duration metric: createHost completed in 2.357699542s
	I1201 10:16:13.399805    7862 start.go:83] releasing machines lock for "custom-flannel-384000", held for 2.358110125s
	W1201 10:16:13.400179    7862 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-384000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-384000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:16:13.422045    7862 out.go:177] 
	W1201 10:16:13.427053    7862 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1201 10:16:13.427093    7862 out.go:239] * 
	* 
	W1201 10:16:13.429981    7862 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 10:16:13.445011    7862 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.92s)
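
The QEMU invocation captured above is wrapped in /opt/socket_vmnet/bin/socket_vmnet_client, and the "-netdev socket,id=net0,fd=3" argument presupposes that the wrapper has already connected to /var/run/socket_vmnet and handed the connected socket to QEMU as file descriptor 3. The sketch below illustrates that hand-off mechanism in Go; it is not the real socket_vmnet_client, and the QEMU flags are abbreviated. When the initial Dial fails, it surfaces the same "Failed to connect" condition seen in the log.

// fdhandoff.go - illustration (not the real socket_vmnet_client) of how a
// connected unix socket can be passed to QEMU as fd 3, which is what the
// logged "-netdev socket,id=net0,fd=3" flag expects to find.
package main

import (
	"log"
	"net"
	"os"
	"os/exec"
)

func main() {
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		// This is the step that fails on the CI host in the runs above.
		log.Fatalf("Failed to connect to socket_vmnet: %v", err)
	}
	sockFile, err := conn.(*net.UnixConn).File() // duplicate the socket as an *os.File
	if err != nil {
		log.Fatal(err)
	}

	// ExtraFiles[0] becomes fd 3 in the child (after stdin/stdout/stderr).
	// The remaining QEMU flags from the log are omitted for brevity.
	cmd := exec.Command("qemu-system-aarch64", "-netdev", "socket,id=net0,fd=3")
	cmd.ExtraFiles = []*os.File{sockFile}
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}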

TestNetworkPlugins/group/false/Start (9.95s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-384000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-384000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.951614459s)

-- stdout --
	* [false-384000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17703
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node false-384000 in cluster false-384000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-384000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1201 10:16:15.997096    7984 out.go:296] Setting OutFile to fd 1 ...
	I1201 10:16:15.997224    7984 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:16:15.997226    7984 out.go:309] Setting ErrFile to fd 2...
	I1201 10:16:15.997229    7984 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:16:15.997347    7984 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17703-5375/.minikube/bin
	I1201 10:16:15.998468    7984 out.go:303] Setting JSON to false
	I1201 10:16:16.014424    7984 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2750,"bootTime":1701451826,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1201 10:16:16.014511    7984 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1201 10:16:16.020960    7984 out.go:177] * [false-384000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1201 10:16:16.027939    7984 out.go:177]   - MINIKUBE_LOCATION=17703
	I1201 10:16:16.024009    7984 notify.go:220] Checking for updates...
	I1201 10:16:16.036948    7984 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	I1201 10:16:16.043967    7984 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1201 10:16:16.047926    7984 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 10:16:16.050953    7984 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	I1201 10:16:16.057924    7984 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 10:16:16.062306    7984 config.go:182] Loaded profile config "multinode-486000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1201 10:16:16.062359    7984 driver.go:392] Setting default libvirt URI to qemu:///system
	I1201 10:16:16.065860    7984 out.go:177] * Using the qemu2 driver based on user configuration
	I1201 10:16:16.072973    7984 start.go:298] selected driver: qemu2
	I1201 10:16:16.072981    7984 start.go:902] validating driver "qemu2" against <nil>
	I1201 10:16:16.072987    7984 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 10:16:16.075444    7984 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1201 10:16:16.079904    7984 out.go:177] * Automatically selected the socket_vmnet network
	I1201 10:16:16.083056    7984 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 10:16:16.083091    7984 cni.go:84] Creating CNI manager for "false"
	I1201 10:16:16.083096    7984 start_flags.go:323] config:
	{Name:false-384000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:false-384000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRIS
ocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPI
D:0 AutoPauseInterval:1m0s GPUs:}
	I1201 10:16:16.087907    7984 iso.go:125] acquiring lock: {Name:mk3aa781e939aa763c73ad5fb916ad9c0c6cf746 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 10:16:16.096075    7984 out.go:177] * Starting control plane node false-384000 in cluster false-384000
	I1201 10:16:16.099982    7984 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1201 10:16:16.100016    7984 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I1201 10:16:16.100031    7984 cache.go:56] Caching tarball of preloaded images
	I1201 10:16:16.100126    7984 preload.go:174] Found /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1201 10:16:16.100134    7984 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1201 10:16:16.100219    7984 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17703-5375/.minikube/profiles/false-384000/config.json ...
	I1201 10:16:16.100243    7984 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17703-5375/.minikube/profiles/false-384000/config.json: {Name:mk76fb0306780f2e2dae6947c7fa60a75916be4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 10:16:16.100505    7984 start.go:365] acquiring machines lock for false-384000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:16:16.100547    7984 start.go:369] acquired machines lock for "false-384000" in 35.417µs
	I1201 10:16:16.100562    7984 start.go:93] Provisioning new machine with config: &{Name:false-384000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:false-384000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26
2144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1201 10:16:16.100592    7984 start.go:125] createHost starting for "" (driver="qemu2")
	I1201 10:16:16.104940    7984 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1201 10:16:16.123257    7984 start.go:159] libmachine.API.Create for "false-384000" (driver="qemu2")
	I1201 10:16:16.123290    7984 client.go:168] LocalClient.Create starting
	I1201 10:16:16.123361    7984 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/ca.pem
	I1201 10:16:16.123397    7984 main.go:141] libmachine: Decoding PEM data...
	I1201 10:16:16.123408    7984 main.go:141] libmachine: Parsing certificate...
	I1201 10:16:16.123451    7984 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/cert.pem
	I1201 10:16:16.123475    7984 main.go:141] libmachine: Decoding PEM data...
	I1201 10:16:16.123484    7984 main.go:141] libmachine: Parsing certificate...
	I1201 10:16:16.123848    7984 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17703-5375/.minikube/cache/iso/arm64/minikube-v1.32.1-1701387192-17703-arm64.iso...
	I1201 10:16:16.254298    7984 main.go:141] libmachine: Creating SSH key...
	I1201 10:16:16.402592    7984 main.go:141] libmachine: Creating Disk image...
	I1201 10:16:16.402598    7984 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1201 10:16:16.402777    7984 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/false-384000/disk.qcow2.raw /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/false-384000/disk.qcow2
	I1201 10:16:16.415089    7984 main.go:141] libmachine: STDOUT: 
	I1201 10:16:16.415107    7984 main.go:141] libmachine: STDERR: 
	I1201 10:16:16.415155    7984 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/false-384000/disk.qcow2 +20000M
	I1201 10:16:16.425607    7984 main.go:141] libmachine: STDOUT: Image resized.
	
	I1201 10:16:16.425635    7984 main.go:141] libmachine: STDERR: 
	I1201 10:16:16.425650    7984 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/false-384000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/false-384000/disk.qcow2
	I1201 10:16:16.425656    7984 main.go:141] libmachine: Starting QEMU VM...
	I1201 10:16:16.425687    7984 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/false-384000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/false-384000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/false-384000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:74:c8:7a:0b:ee -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/false-384000/disk.qcow2
	I1201 10:16:16.427343    7984 main.go:141] libmachine: STDOUT: 
	I1201 10:16:16.427358    7984 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:16:16.427377    7984 client.go:171] LocalClient.Create took 304.088083ms
	I1201 10:16:18.429503    7984 start.go:128] duration metric: createHost completed in 2.328949167s
	I1201 10:16:18.429559    7984 start.go:83] releasing machines lock for "false-384000", held for 2.329057292s
	W1201 10:16:18.429642    7984 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:16:18.455743    7984 out.go:177] * Deleting "false-384000" in qemu2 ...
	W1201 10:16:18.480138    7984 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:16:18.480172    7984 start.go:709] Will try again in 5 seconds ...
	I1201 10:16:23.482294    7984 start.go:365] acquiring machines lock for false-384000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:16:23.482683    7984 start.go:369] acquired machines lock for "false-384000" in 297.375µs
	I1201 10:16:23.482778    7984 start.go:93] Provisioning new machine with config: &{Name:false-384000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:false-384000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26
2144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1201 10:16:23.483193    7984 start.go:125] createHost starting for "" (driver="qemu2")
	I1201 10:16:23.506793    7984 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1201 10:16:23.553290    7984 start.go:159] libmachine.API.Create for "false-384000" (driver="qemu2")
	I1201 10:16:23.553332    7984 client.go:168] LocalClient.Create starting
	I1201 10:16:23.553450    7984 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/ca.pem
	I1201 10:16:23.553509    7984 main.go:141] libmachine: Decoding PEM data...
	I1201 10:16:23.553529    7984 main.go:141] libmachine: Parsing certificate...
	I1201 10:16:23.553595    7984 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/cert.pem
	I1201 10:16:23.553637    7984 main.go:141] libmachine: Decoding PEM data...
	I1201 10:16:23.553649    7984 main.go:141] libmachine: Parsing certificate...
	I1201 10:16:23.554124    7984 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17703-5375/.minikube/cache/iso/arm64/minikube-v1.32.1-1701387192-17703-arm64.iso...
	I1201 10:16:23.699531    7984 main.go:141] libmachine: Creating SSH key...
	I1201 10:16:23.837515    7984 main.go:141] libmachine: Creating Disk image...
	I1201 10:16:23.837521    7984 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1201 10:16:23.837702    7984 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/false-384000/disk.qcow2.raw /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/false-384000/disk.qcow2
	I1201 10:16:23.850016    7984 main.go:141] libmachine: STDOUT: 
	I1201 10:16:23.850095    7984 main.go:141] libmachine: STDERR: 
	I1201 10:16:23.850154    7984 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/false-384000/disk.qcow2 +20000M
	I1201 10:16:23.860600    7984 main.go:141] libmachine: STDOUT: Image resized.
	
	I1201 10:16:23.860616    7984 main.go:141] libmachine: STDERR: 
	I1201 10:16:23.860630    7984 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/false-384000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/false-384000/disk.qcow2
	I1201 10:16:23.860638    7984 main.go:141] libmachine: Starting QEMU VM...
	I1201 10:16:23.860677    7984 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/false-384000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/false-384000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/false-384000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:b5:60:d8:39:e9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/false-384000/disk.qcow2
	I1201 10:16:23.862280    7984 main.go:141] libmachine: STDOUT: 
	I1201 10:16:23.862297    7984 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:16:23.862311    7984 client.go:171] LocalClient.Create took 308.979167ms
	I1201 10:16:25.864445    7984 start.go:128] duration metric: createHost completed in 2.381266875s
	I1201 10:16:25.864539    7984 start.go:83] releasing machines lock for "false-384000", held for 2.381881541s
	W1201 10:16:25.864999    7984 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-384000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-384000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:16:25.883658    7984 out.go:177] 
	W1201 10:16:25.888616    7984 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1201 10:16:25.888653    7984 out.go:239] * 
	* 
	W1201 10:16:25.892037    7984 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 10:16:25.905626    7984 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.95s)
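Every qemu2 start in this group fails the same way: socket_vmnet_client cannot reach "/var/run/socket_vmnet" ("Connection refused"), so VM creation aborts on both attempts and the test exits with status 80. As a pre-flight sanity check on the host, that control socket can be probed directly. The sketch below is illustrative only (it is not part of net_test.go) and assumes the default SocketVMnetPath shown in the profile config dumps above.

    // Minimal sketch: probe the socket_vmnet control socket that minikube's
    // qemu2 driver needs. A "connection refused" here reproduces the failure
    // captured in the logs above and means the socket_vmnet daemon is not
    // running (or not listening on this path).
    package main

    import (
    	"fmt"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	const socketPath = "/var/run/socket_vmnet" // SocketVMnetPath from the profile config

    	conn, err := net.DialTimeout("unix", socketPath, 2*time.Second)
    	if err != nil {
    		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", socketPath, err)
    		os.Exit(1)
    	}
    	defer conn.Close()
    	fmt.Printf("socket_vmnet is listening at %s\n", socketPath)
    }

If the probe fails, restarting the socket_vmnet service on the CI host (however it is managed there) is the likely fix; this whole class of GUEST_PROVISION failures should disappear once the socket accepts connections again.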

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (9.8s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-384000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-384000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.798931084s)

                                                
                                                
-- stdout --
	* [enable-default-cni-384000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17703
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node enable-default-cni-384000 in cluster enable-default-cni-384000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-384000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 10:16:28.249969    8096 out.go:296] Setting OutFile to fd 1 ...
	I1201 10:16:28.250111    8096 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:16:28.250113    8096 out.go:309] Setting ErrFile to fd 2...
	I1201 10:16:28.250116    8096 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:16:28.250236    8096 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17703-5375/.minikube/bin
	I1201 10:16:28.251324    8096 out.go:303] Setting JSON to false
	I1201 10:16:28.267285    8096 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2762,"bootTime":1701451826,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1201 10:16:28.267382    8096 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1201 10:16:28.273314    8096 out.go:177] * [enable-default-cni-384000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1201 10:16:28.286294    8096 out.go:177]   - MINIKUBE_LOCATION=17703
	I1201 10:16:28.282375    8096 notify.go:220] Checking for updates...
	I1201 10:16:28.294112    8096 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	I1201 10:16:28.302250    8096 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1201 10:16:28.309297    8096 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 10:16:28.316213    8096 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	I1201 10:16:28.323363    8096 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 10:16:28.327623    8096 config.go:182] Loaded profile config "multinode-486000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1201 10:16:28.327673    8096 driver.go:392] Setting default libvirt URI to qemu:///system
	I1201 10:16:28.332255    8096 out.go:177] * Using the qemu2 driver based on user configuration
	I1201 10:16:28.339210    8096 start.go:298] selected driver: qemu2
	I1201 10:16:28.339216    8096 start.go:902] validating driver "qemu2" against <nil>
	I1201 10:16:28.339223    8096 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 10:16:28.341829    8096 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1201 10:16:28.346336    8096 out.go:177] * Automatically selected the socket_vmnet network
	E1201 10:16:28.348085    8096 start_flags.go:465] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I1201 10:16:28.348097    8096 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 10:16:28.348152    8096 cni.go:84] Creating CNI manager for "bridge"
	I1201 10:16:28.348157    8096 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1201 10:16:28.348164    8096 start_flags.go:323] config:
	{Name:enable-default-cni-384000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:enable-default-cni-384000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet Sta
ticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1201 10:16:28.353250    8096 iso.go:125] acquiring lock: {Name:mk3aa781e939aa763c73ad5fb916ad9c0c6cf746 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 10:16:28.360348    8096 out.go:177] * Starting control plane node enable-default-cni-384000 in cluster enable-default-cni-384000
	I1201 10:16:28.364261    8096 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1201 10:16:28.364293    8096 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I1201 10:16:28.364303    8096 cache.go:56] Caching tarball of preloaded images
	I1201 10:16:28.364409    8096 preload.go:174] Found /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1201 10:16:28.364416    8096 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1201 10:16:28.364520    8096 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17703-5375/.minikube/profiles/enable-default-cni-384000/config.json ...
	I1201 10:16:28.364535    8096 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17703-5375/.minikube/profiles/enable-default-cni-384000/config.json: {Name:mk6040be46a2ec2ee2c2433c856f67f916468c82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 10:16:28.364764    8096 start.go:365] acquiring machines lock for enable-default-cni-384000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:16:28.364803    8096 start.go:369] acquired machines lock for "enable-default-cni-384000" in 31.083µs
	I1201 10:16:28.364819    8096 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-384000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:enable-default-cni-384000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1201 10:16:28.364850    8096 start.go:125] createHost starting for "" (driver="qemu2")
	I1201 10:16:28.371227    8096 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1201 10:16:28.390567    8096 start.go:159] libmachine.API.Create for "enable-default-cni-384000" (driver="qemu2")
	I1201 10:16:28.390596    8096 client.go:168] LocalClient.Create starting
	I1201 10:16:28.390671    8096 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/ca.pem
	I1201 10:16:28.390704    8096 main.go:141] libmachine: Decoding PEM data...
	I1201 10:16:28.390717    8096 main.go:141] libmachine: Parsing certificate...
	I1201 10:16:28.390761    8096 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/cert.pem
	I1201 10:16:28.390784    8096 main.go:141] libmachine: Decoding PEM data...
	I1201 10:16:28.390794    8096 main.go:141] libmachine: Parsing certificate...
	I1201 10:16:28.391144    8096 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17703-5375/.minikube/cache/iso/arm64/minikube-v1.32.1-1701387192-17703-arm64.iso...
	I1201 10:16:28.521890    8096 main.go:141] libmachine: Creating SSH key...
	I1201 10:16:28.575407    8096 main.go:141] libmachine: Creating Disk image...
	I1201 10:16:28.575413    8096 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1201 10:16:28.575577    8096 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/enable-default-cni-384000/disk.qcow2.raw /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/enable-default-cni-384000/disk.qcow2
	I1201 10:16:28.587447    8096 main.go:141] libmachine: STDOUT: 
	I1201 10:16:28.587466    8096 main.go:141] libmachine: STDERR: 
	I1201 10:16:28.587517    8096 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/enable-default-cni-384000/disk.qcow2 +20000M
	I1201 10:16:28.597862    8096 main.go:141] libmachine: STDOUT: Image resized.
	
	I1201 10:16:28.597879    8096 main.go:141] libmachine: STDERR: 
	I1201 10:16:28.597896    8096 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/enable-default-cni-384000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/enable-default-cni-384000/disk.qcow2
	I1201 10:16:28.597903    8096 main.go:141] libmachine: Starting QEMU VM...
	I1201 10:16:28.597951    8096 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/enable-default-cni-384000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/enable-default-cni-384000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/enable-default-cni-384000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:82:a5:be:94:47 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/enable-default-cni-384000/disk.qcow2
	I1201 10:16:28.599590    8096 main.go:141] libmachine: STDOUT: 
	I1201 10:16:28.599605    8096 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:16:28.599625    8096 client.go:171] LocalClient.Create took 209.02825ms
	I1201 10:16:30.601771    8096 start.go:128] duration metric: createHost completed in 2.236951084s
	I1201 10:16:30.601832    8096 start.go:83] releasing machines lock for "enable-default-cni-384000", held for 2.237071709s
	W1201 10:16:30.602015    8096 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:16:30.616949    8096 out.go:177] * Deleting "enable-default-cni-384000" in qemu2 ...
	W1201 10:16:30.643837    8096 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:16:30.643905    8096 start.go:709] Will try again in 5 seconds ...
	I1201 10:16:35.646003    8096 start.go:365] acquiring machines lock for enable-default-cni-384000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:16:35.646443    8096 start.go:369] acquired machines lock for "enable-default-cni-384000" in 333.917µs
	I1201 10:16:35.646550    8096 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-384000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:enable-default-cni-384000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1201 10:16:35.646834    8096 start.go:125] createHost starting for "" (driver="qemu2")
	I1201 10:16:35.663590    8096 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1201 10:16:35.710161    8096 start.go:159] libmachine.API.Create for "enable-default-cni-384000" (driver="qemu2")
	I1201 10:16:35.710214    8096 client.go:168] LocalClient.Create starting
	I1201 10:16:35.710364    8096 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/ca.pem
	I1201 10:16:35.710437    8096 main.go:141] libmachine: Decoding PEM data...
	I1201 10:16:35.710477    8096 main.go:141] libmachine: Parsing certificate...
	I1201 10:16:35.710540    8096 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/cert.pem
	I1201 10:16:35.710584    8096 main.go:141] libmachine: Decoding PEM data...
	I1201 10:16:35.710599    8096 main.go:141] libmachine: Parsing certificate...
	I1201 10:16:35.711090    8096 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17703-5375/.minikube/cache/iso/arm64/minikube-v1.32.1-1701387192-17703-arm64.iso...
	I1201 10:16:35.853895    8096 main.go:141] libmachine: Creating SSH key...
	I1201 10:16:35.936448    8096 main.go:141] libmachine: Creating Disk image...
	I1201 10:16:35.936454    8096 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1201 10:16:35.936627    8096 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/enable-default-cni-384000/disk.qcow2.raw /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/enable-default-cni-384000/disk.qcow2
	I1201 10:16:35.948709    8096 main.go:141] libmachine: STDOUT: 
	I1201 10:16:35.948727    8096 main.go:141] libmachine: STDERR: 
	I1201 10:16:35.948792    8096 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/enable-default-cni-384000/disk.qcow2 +20000M
	I1201 10:16:35.959249    8096 main.go:141] libmachine: STDOUT: Image resized.
	
	I1201 10:16:35.959264    8096 main.go:141] libmachine: STDERR: 
	I1201 10:16:35.959276    8096 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/enable-default-cni-384000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/enable-default-cni-384000/disk.qcow2
	I1201 10:16:35.959284    8096 main.go:141] libmachine: Starting QEMU VM...
	I1201 10:16:35.959317    8096 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/enable-default-cni-384000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/enable-default-cni-384000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/enable-default-cni-384000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:7a:70:7b:a9:94 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/enable-default-cni-384000/disk.qcow2
	I1201 10:16:35.960887    8096 main.go:141] libmachine: STDOUT: 
	I1201 10:16:35.960901    8096 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:16:35.960913    8096 client.go:171] LocalClient.Create took 250.70075ms
	I1201 10:16:37.963057    8096 start.go:128] duration metric: createHost completed in 2.316218625s
	I1201 10:16:37.963122    8096 start.go:83] releasing machines lock for "enable-default-cni-384000", held for 2.316709166s
	W1201 10:16:37.963472    8096 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-384000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-384000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:16:37.984191    8096 out.go:177] 
	W1201 10:16:37.987433    8096 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1201 10:16:37.987524    8096 out.go:239] * 
	* 
	W1201 10:16:37.990169    8096 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 10:16:38.004090    8096 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.80s)
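This section also shows the deprecated flag being rewritten on the fly: start_flags.go logs "Found deprecated --enable-default-cni flag, setting --cni=bridge", so the generated profile ends up with CNI:bridge and NetworkPlugin:cni. The sketch below (illustrative only, not the actual net_test.go helper) mirrors the invocation captured in the log but passes --cni=bridge directly, which exercises the same bridge CNI path without the deprecation warning.

    // Illustrative sketch: run the same start command as the test, using the
    // non-deprecated --cni=bridge form that minikube maps
    // --enable-default-cni=true onto (per the start_flags.go warning above).
    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("out/minikube-darwin-arm64", "start",
    		"-p", "enable-default-cni-384000",
    		"--memory=3072",
    		"--alsologtostderr",
    		"--wait=true", "--wait-timeout=15m",
    		"--cni=bridge", // equivalent of the deprecated --enable-default-cni=true
    		"--driver=qemu2",
    	)
    	cmd.Stdout = os.Stdout
    	cmd.Stderr = os.Stderr
    	if err := cmd.Run(); err != nil {
    		log.Fatalf("minikube start failed: %v", err) // exit status 80 in this run while socket_vmnet is down
    	}
    }

Note that the start itself would still fail here for the same socket_vmnet reason as the other groups; the flag mapping is independent of that host-side problem.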

                                                
                                    
TestNetworkPlugins/group/flannel/Start (9.8s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-384000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-384000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.794251625s)

                                                
                                                
-- stdout --
	* [flannel-384000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17703
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node flannel-384000 in cluster flannel-384000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-384000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 10:16:40.394647    8210 out.go:296] Setting OutFile to fd 1 ...
	I1201 10:16:40.394793    8210 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:16:40.394796    8210 out.go:309] Setting ErrFile to fd 2...
	I1201 10:16:40.394799    8210 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:16:40.394921    8210 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17703-5375/.minikube/bin
	I1201 10:16:40.396000    8210 out.go:303] Setting JSON to false
	I1201 10:16:40.412249    8210 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2774,"bootTime":1701451826,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1201 10:16:40.412309    8210 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1201 10:16:40.419009    8210 out.go:177] * [flannel-384000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1201 10:16:40.432965    8210 out.go:177]   - MINIKUBE_LOCATION=17703
	I1201 10:16:40.428041    8210 notify.go:220] Checking for updates...
	I1201 10:16:40.438905    8210 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	I1201 10:16:40.445942    8210 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1201 10:16:40.452916    8210 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 10:16:40.459919    8210 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	I1201 10:16:40.466925    8210 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 10:16:40.471304    8210 config.go:182] Loaded profile config "multinode-486000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1201 10:16:40.471350    8210 driver.go:392] Setting default libvirt URI to qemu:///system
	I1201 10:16:40.475983    8210 out.go:177] * Using the qemu2 driver based on user configuration
	I1201 10:16:40.482917    8210 start.go:298] selected driver: qemu2
	I1201 10:16:40.482922    8210 start.go:902] validating driver "qemu2" against <nil>
	I1201 10:16:40.482927    8210 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 10:16:40.485513    8210 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1201 10:16:40.488856    8210 out.go:177] * Automatically selected the socket_vmnet network
	I1201 10:16:40.491968    8210 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 10:16:40.492009    8210 cni.go:84] Creating CNI manager for "flannel"
	I1201 10:16:40.492015    8210 start_flags.go:318] Found "Flannel" CNI - setting NetworkPlugin=cni
	I1201 10:16:40.492022    8210 start_flags.go:323] config:
	{Name:flannel-384000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:flannel-384000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: S
SHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1201 10:16:40.497076    8210 iso.go:125] acquiring lock: {Name:mk3aa781e939aa763c73ad5fb916ad9c0c6cf746 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 10:16:40.503921    8210 out.go:177] * Starting control plane node flannel-384000 in cluster flannel-384000
	I1201 10:16:40.507995    8210 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1201 10:16:40.508024    8210 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I1201 10:16:40.508036    8210 cache.go:56] Caching tarball of preloaded images
	I1201 10:16:40.508122    8210 preload.go:174] Found /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1201 10:16:40.508128    8210 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1201 10:16:40.508215    8210 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17703-5375/.minikube/profiles/flannel-384000/config.json ...
	I1201 10:16:40.508236    8210 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17703-5375/.minikube/profiles/flannel-384000/config.json: {Name:mk81797fc11aa88013c362f2891aa2308ddad47a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 10:16:40.508649    8210 start.go:365] acquiring machines lock for flannel-384000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:16:40.508688    8210 start.go:369] acquired machines lock for "flannel-384000" in 33.167µs
	I1201 10:16:40.508712    8210 start.go:93] Provisioning new machine with config: &{Name:flannel-384000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.4 ClusterName:flannel-384000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1201 10:16:40.508739    8210 start.go:125] createHost starting for "" (driver="qemu2")
	I1201 10:16:40.513985    8210 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1201 10:16:40.531748    8210 start.go:159] libmachine.API.Create for "flannel-384000" (driver="qemu2")
	I1201 10:16:40.531772    8210 client.go:168] LocalClient.Create starting
	I1201 10:16:40.531839    8210 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/ca.pem
	I1201 10:16:40.531868    8210 main.go:141] libmachine: Decoding PEM data...
	I1201 10:16:40.531879    8210 main.go:141] libmachine: Parsing certificate...
	I1201 10:16:40.531917    8210 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/cert.pem
	I1201 10:16:40.531940    8210 main.go:141] libmachine: Decoding PEM data...
	I1201 10:16:40.531947    8210 main.go:141] libmachine: Parsing certificate...
	I1201 10:16:40.532272    8210 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17703-5375/.minikube/cache/iso/arm64/minikube-v1.32.1-1701387192-17703-arm64.iso...
	I1201 10:16:40.664351    8210 main.go:141] libmachine: Creating SSH key...
	I1201 10:16:40.727880    8210 main.go:141] libmachine: Creating Disk image...
	I1201 10:16:40.727886    8210 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1201 10:16:40.728041    8210 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/flannel-384000/disk.qcow2.raw /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/flannel-384000/disk.qcow2
	I1201 10:16:40.740015    8210 main.go:141] libmachine: STDOUT: 
	I1201 10:16:40.740035    8210 main.go:141] libmachine: STDERR: 
	I1201 10:16:40.740093    8210 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/flannel-384000/disk.qcow2 +20000M
	I1201 10:16:40.750871    8210 main.go:141] libmachine: STDOUT: Image resized.
	
	I1201 10:16:40.750886    8210 main.go:141] libmachine: STDERR: 
	I1201 10:16:40.750904    8210 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/flannel-384000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/flannel-384000/disk.qcow2
	I1201 10:16:40.750910    8210 main.go:141] libmachine: Starting QEMU VM...
	I1201 10:16:40.750964    8210 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/flannel-384000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/flannel-384000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/flannel-384000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:37:14:29:83:15 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/flannel-384000/disk.qcow2
	I1201 10:16:40.752585    8210 main.go:141] libmachine: STDOUT: 
	I1201 10:16:40.752601    8210 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:16:40.752618    8210 client.go:171] LocalClient.Create took 220.844083ms
	I1201 10:16:42.754734    8210 start.go:128] duration metric: createHost completed in 2.246029959s
	I1201 10:16:42.754803    8210 start.go:83] releasing machines lock for "flannel-384000", held for 2.246158625s
	W1201 10:16:42.754859    8210 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:16:42.776118    8210 out.go:177] * Deleting "flannel-384000" in qemu2 ...
	W1201 10:16:42.803805    8210 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:16:42.803840    8210 start.go:709] Will try again in 5 seconds ...
	I1201 10:16:47.806027    8210 start.go:365] acquiring machines lock for flannel-384000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:16:47.806387    8210 start.go:369] acquired machines lock for "flannel-384000" in 274.833µs
	I1201 10:16:47.806538    8210 start.go:93] Provisioning new machine with config: &{Name:flannel-384000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.4 ClusterName:flannel-384000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1201 10:16:47.806788    8210 start.go:125] createHost starting for "" (driver="qemu2")
	I1201 10:16:47.824441    8210 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1201 10:16:47.872323    8210 start.go:159] libmachine.API.Create for "flannel-384000" (driver="qemu2")
	I1201 10:16:47.872384    8210 client.go:168] LocalClient.Create starting
	I1201 10:16:47.872563    8210 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/ca.pem
	I1201 10:16:47.872635    8210 main.go:141] libmachine: Decoding PEM data...
	I1201 10:16:47.872656    8210 main.go:141] libmachine: Parsing certificate...
	I1201 10:16:47.872727    8210 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/cert.pem
	I1201 10:16:47.872774    8210 main.go:141] libmachine: Decoding PEM data...
	I1201 10:16:47.872788    8210 main.go:141] libmachine: Parsing certificate...
	I1201 10:16:47.873279    8210 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17703-5375/.minikube/cache/iso/arm64/minikube-v1.32.1-1701387192-17703-arm64.iso...
	I1201 10:16:48.016612    8210 main.go:141] libmachine: Creating SSH key...
	I1201 10:16:48.065831    8210 main.go:141] libmachine: Creating Disk image...
	I1201 10:16:48.065838    8210 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1201 10:16:48.065990    8210 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/flannel-384000/disk.qcow2.raw /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/flannel-384000/disk.qcow2
	I1201 10:16:48.077903    8210 main.go:141] libmachine: STDOUT: 
	I1201 10:16:48.077922    8210 main.go:141] libmachine: STDERR: 
	I1201 10:16:48.077976    8210 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/flannel-384000/disk.qcow2 +20000M
	I1201 10:16:48.088580    8210 main.go:141] libmachine: STDOUT: Image resized.
	
	I1201 10:16:48.088599    8210 main.go:141] libmachine: STDERR: 
	I1201 10:16:48.088611    8210 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/flannel-384000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/flannel-384000/disk.qcow2
	I1201 10:16:48.088616    8210 main.go:141] libmachine: Starting QEMU VM...
	I1201 10:16:48.088647    8210 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/flannel-384000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/flannel-384000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/flannel-384000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:c0:e7:39:ff:53 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/flannel-384000/disk.qcow2
	I1201 10:16:48.090348    8210 main.go:141] libmachine: STDOUT: 
	I1201 10:16:48.090364    8210 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:16:48.090389    8210 client.go:171] LocalClient.Create took 218.001416ms
	I1201 10:16:50.092643    8210 start.go:128] duration metric: createHost completed in 2.285865375s
	I1201 10:16:50.092714    8210 start.go:83] releasing machines lock for "flannel-384000", held for 2.286359875s
	W1201 10:16:50.093063    8210 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-384000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-384000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:16:50.122445    8210 out.go:177] 
	W1201 10:16:50.126704    8210 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1201 10:16:50.126760    8210 out.go:239] * 
	* 
	W1201 10:16:50.128618    8210 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 10:16:50.145604    8210 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.80s)
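Both create attempts above fail at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), which points at the socket_vmnet daemon not running on the agent rather than at the flannel CNI itself. As a minimal illustrative probe (not part of the test suite; the socket path is the SocketVMnetPath value in the config logged above), a few lines of Go reproduce the check:

// socketprobe.go - sketch: dial the socket_vmnet unix socket the same way
// socket_vmnet_client would, to confirm whether the daemon is listening.
// The path below is the SocketVMnetPath reported in the minikube config above.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet"

	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// On the failing agent this prints a "connection refused" (or
		// "no such file or directory") error, matching the STDERR above.
		fmt.Fprintf(os.Stderr, "cannot reach %s: %v\n", sock, err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Printf("%s is accepting connections\n", sock)
}

If this probe fails on the agent, restarting the socket_vmnet service there is the likely fix; that is an operational assumption, not something the log itself confirms.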

TestNetworkPlugins/group/bridge/Start (10.2s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-384000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-384000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (10.199380125s)

-- stdout --
	* [bridge-384000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17703
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node bridge-384000 in cluster bridge-384000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-384000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1201 10:16:52.708192    8328 out.go:296] Setting OutFile to fd 1 ...
	I1201 10:16:52.708321    8328 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:16:52.708325    8328 out.go:309] Setting ErrFile to fd 2...
	I1201 10:16:52.708327    8328 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:16:52.708459    8328 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17703-5375/.minikube/bin
	I1201 10:16:52.709601    8328 out.go:303] Setting JSON to false
	I1201 10:16:52.725577    8328 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2786,"bootTime":1701451826,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1201 10:16:52.725669    8328 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1201 10:16:52.731146    8328 out.go:177] * [bridge-384000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1201 10:16:52.744043    8328 out.go:177]   - MINIKUBE_LOCATION=17703
	I1201 10:16:52.739248    8328 notify.go:220] Checking for updates...
	I1201 10:16:52.751071    8328 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	I1201 10:16:52.759089    8328 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1201 10:16:52.763002    8328 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 10:16:52.766038    8328 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	I1201 10:16:52.773040    8328 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 10:16:52.777444    8328 config.go:182] Loaded profile config "multinode-486000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1201 10:16:52.777495    8328 driver.go:392] Setting default libvirt URI to qemu:///system
	I1201 10:16:52.780978    8328 out.go:177] * Using the qemu2 driver based on user configuration
	I1201 10:16:52.788039    8328 start.go:298] selected driver: qemu2
	I1201 10:16:52.788044    8328 start.go:902] validating driver "qemu2" against <nil>
	I1201 10:16:52.788050    8328 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 10:16:52.790617    8328 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1201 10:16:52.794051    8328 out.go:177] * Automatically selected the socket_vmnet network
	I1201 10:16:52.798167    8328 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 10:16:52.798216    8328 cni.go:84] Creating CNI manager for "bridge"
	I1201 10:16:52.798222    8328 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1201 10:16:52.798227    8328 start_flags.go:323] config:
	{Name:bridge-384000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:bridge-384000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHA
gentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1201 10:16:52.803081    8328 iso.go:125] acquiring lock: {Name:mk3aa781e939aa763c73ad5fb916ad9c0c6cf746 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 10:16:52.811087    8328 out.go:177] * Starting control plane node bridge-384000 in cluster bridge-384000
	I1201 10:16:52.814243    8328 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1201 10:16:52.814274    8328 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I1201 10:16:52.814287    8328 cache.go:56] Caching tarball of preloaded images
	I1201 10:16:52.814355    8328 preload.go:174] Found /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1201 10:16:52.814362    8328 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1201 10:16:52.814453    8328 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17703-5375/.minikube/profiles/bridge-384000/config.json ...
	I1201 10:16:52.814466    8328 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17703-5375/.minikube/profiles/bridge-384000/config.json: {Name:mkc202b9855d588672ccd2a35a02a4d970e678dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 10:16:52.814700    8328 start.go:365] acquiring machines lock for bridge-384000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:16:52.814738    8328 start.go:369] acquired machines lock for "bridge-384000" in 29.584µs
	I1201 10:16:52.814751    8328 start.go:93] Provisioning new machine with config: &{Name:bridge-384000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.4 ClusterName:bridge-384000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1201 10:16:52.814783    8328 start.go:125] createHost starting for "" (driver="qemu2")
	I1201 10:16:52.825944    8328 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1201 10:16:52.845198    8328 start.go:159] libmachine.API.Create for "bridge-384000" (driver="qemu2")
	I1201 10:16:52.845231    8328 client.go:168] LocalClient.Create starting
	I1201 10:16:52.845296    8328 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/ca.pem
	I1201 10:16:52.845326    8328 main.go:141] libmachine: Decoding PEM data...
	I1201 10:16:52.845340    8328 main.go:141] libmachine: Parsing certificate...
	I1201 10:16:52.845380    8328 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/cert.pem
	I1201 10:16:52.845410    8328 main.go:141] libmachine: Decoding PEM data...
	I1201 10:16:52.845418    8328 main.go:141] libmachine: Parsing certificate...
	I1201 10:16:52.845805    8328 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17703-5375/.minikube/cache/iso/arm64/minikube-v1.32.1-1701387192-17703-arm64.iso...
	I1201 10:16:52.976661    8328 main.go:141] libmachine: Creating SSH key...
	I1201 10:16:53.406828    8328 main.go:141] libmachine: Creating Disk image...
	I1201 10:16:53.406846    8328 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1201 10:16:53.407076    8328 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/bridge-384000/disk.qcow2.raw /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/bridge-384000/disk.qcow2
	I1201 10:16:53.419781    8328 main.go:141] libmachine: STDOUT: 
	I1201 10:16:53.419800    8328 main.go:141] libmachine: STDERR: 
	I1201 10:16:53.419862    8328 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/bridge-384000/disk.qcow2 +20000M
	I1201 10:16:53.430440    8328 main.go:141] libmachine: STDOUT: Image resized.
	
	I1201 10:16:53.430461    8328 main.go:141] libmachine: STDERR: 
	I1201 10:16:53.430480    8328 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/bridge-384000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/bridge-384000/disk.qcow2
	I1201 10:16:53.430486    8328 main.go:141] libmachine: Starting QEMU VM...
	I1201 10:16:53.430521    8328 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/bridge-384000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/bridge-384000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/bridge-384000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:32:80:f0:35:71 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/bridge-384000/disk.qcow2
	I1201 10:16:53.432229    8328 main.go:141] libmachine: STDOUT: 
	I1201 10:16:53.432245    8328 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:16:53.432267    8328 client.go:171] LocalClient.Create took 587.04375ms
	I1201 10:16:55.434397    8328 start.go:128] duration metric: createHost completed in 2.619654583s
	I1201 10:16:55.434446    8328 start.go:83] releasing machines lock for "bridge-384000", held for 2.619758167s
	W1201 10:16:55.434519    8328 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:16:55.453764    8328 out.go:177] * Deleting "bridge-384000" in qemu2 ...
	W1201 10:16:55.481677    8328 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:16:55.481716    8328 start.go:709] Will try again in 5 seconds ...
	I1201 10:17:00.483863    8328 start.go:365] acquiring machines lock for bridge-384000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:17:00.484343    8328 start.go:369] acquired machines lock for "bridge-384000" in 345.292µs
	I1201 10:17:00.484492    8328 start.go:93] Provisioning new machine with config: &{Name:bridge-384000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.4 ClusterName:bridge-384000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1201 10:17:00.484774    8328 start.go:125] createHost starting for "" (driver="qemu2")
	I1201 10:17:00.496575    8328 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1201 10:17:00.544501    8328 start.go:159] libmachine.API.Create for "bridge-384000" (driver="qemu2")
	I1201 10:17:00.544543    8328 client.go:168] LocalClient.Create starting
	I1201 10:17:00.544673    8328 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/ca.pem
	I1201 10:17:00.544729    8328 main.go:141] libmachine: Decoding PEM data...
	I1201 10:17:00.544754    8328 main.go:141] libmachine: Parsing certificate...
	I1201 10:17:00.544822    8328 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/cert.pem
	I1201 10:17:00.544863    8328 main.go:141] libmachine: Decoding PEM data...
	I1201 10:17:00.544878    8328 main.go:141] libmachine: Parsing certificate...
	I1201 10:17:00.545368    8328 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17703-5375/.minikube/cache/iso/arm64/minikube-v1.32.1-1701387192-17703-arm64.iso...
	I1201 10:17:00.685933    8328 main.go:141] libmachine: Creating SSH key...
	I1201 10:17:00.802855    8328 main.go:141] libmachine: Creating Disk image...
	I1201 10:17:00.802862    8328 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1201 10:17:00.803069    8328 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/bridge-384000/disk.qcow2.raw /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/bridge-384000/disk.qcow2
	I1201 10:17:00.815182    8328 main.go:141] libmachine: STDOUT: 
	I1201 10:17:00.815197    8328 main.go:141] libmachine: STDERR: 
	I1201 10:17:00.815256    8328 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/bridge-384000/disk.qcow2 +20000M
	I1201 10:17:00.825726    8328 main.go:141] libmachine: STDOUT: Image resized.
	
	I1201 10:17:00.825741    8328 main.go:141] libmachine: STDERR: 
	I1201 10:17:00.825762    8328 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/bridge-384000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/bridge-384000/disk.qcow2
	I1201 10:17:00.825767    8328 main.go:141] libmachine: Starting QEMU VM...
	I1201 10:17:00.825805    8328 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/bridge-384000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/bridge-384000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/bridge-384000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:97:6e:3e:7d:6e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/bridge-384000/disk.qcow2
	I1201 10:17:00.827469    8328 main.go:141] libmachine: STDOUT: 
	I1201 10:17:00.827484    8328 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:17:00.827506    8328 client.go:171] LocalClient.Create took 282.96325ms
	I1201 10:17:02.829757    8328 start.go:128] duration metric: createHost completed in 2.344995834s
	I1201 10:17:02.829836    8328 start.go:83] releasing machines lock for "bridge-384000", held for 2.345526541s
	W1201 10:17:02.830274    8328 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-384000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-384000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:17:02.842912    8328 out.go:177] 
	W1201 10:17:02.848010    8328 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1201 10:17:02.848046    8328 out.go:239] * 
	* 
	W1201 10:17:02.850525    8328 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 10:17:02.861761    8328 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (10.20s)
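The bridge profile fails in exactly the same way (the socket_vmnet socket is unreachable), and the assertion in net_test.go is simply on the start command's exit status, which is 80 here. As an illustrative sketch only, reusing the binary path and flags shown in the log (not the actual test code), the same exit-status check can be reproduced outside the harness with os/exec:

// exitcheck.go - sketch: run the same start command the test runs and report
// its exit code, mirroring the "exit status 80" result recorded above.
package main

import (
	"errors"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "start", "-p", "bridge-384000",
		"--memory=3072", "--alsologtostderr", "--wait=true",
		"--wait-timeout=15m", "--cni=bridge", "--driver=qemu2")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr

	err := cmd.Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// On this agent the command exits 80, matching the GUEST_PROVISION
		// failure in the quoted stderr.
		fmt.Printf("minikube start exited with status %d\n", exitErr.ExitCode())
		return
	}
	if err != nil {
		fmt.Fprintf(os.Stderr, "could not run minikube: %v\n", err)
		os.Exit(1)
	}
	fmt.Println("minikube start succeeded")
}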

TestStoppedBinaryUpgrade/Upgrade (2.22s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.1698705105.exe start -p stopped-upgrade-338000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:196: (dbg) Non-zero exit: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.1698705105.exe start -p stopped-upgrade-338000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.1698705105.exe: permission denied (7.04075ms)
version_upgrade_test.go:196: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.1698705105.exe start -p stopped-upgrade-338000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:196: (dbg) Non-zero exit: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.1698705105.exe start -p stopped-upgrade-338000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.1698705105.exe: permission denied (6.6375ms)
version_upgrade_test.go:196: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.1698705105.exe start -p stopped-upgrade-338000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:196: (dbg) Non-zero exit: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.1698705105.exe start -p stopped-upgrade-338000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.1698705105.exe: permission denied (6.401125ms)
version_upgrade_test.go:202: legacy v1.6.2 start failed: fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.1698705105.exe: permission denied
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (2.22s)
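Unlike the socket_vmnet failures above, this test never reaches minikube at all: fork/exec on the downloaded v1.6.2 binary in the temporary directory returns "permission denied", which usually means the file was written without an execute bit (or is otherwise blocked by the OS before it can run). A minimal sketch, assuming the problem is the missing execute bit, that checks and fixes the mode of such a file; the path is taken as an argument because the real one is generated per run:

// execbit.go - sketch: verify that a downloaded binary is executable and add
// the execute bit if it is missing. The path is supplied by the caller; the
// real test uses a per-run file in the darwin temp directory.
package main

import (
	"fmt"
	"os"
)

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: execbit <path-to-binary>")
		os.Exit(2)
	}
	path := os.Args[1]

	info, err := os.Stat(path)
	if err != nil {
		fmt.Fprintf(os.Stderr, "stat %s: %v\n", path, err)
		os.Exit(1)
	}
	if info.Mode().Perm()&0o111 == 0 {
		// No execute bit anywhere: fork/exec on this file would fail with
		// exactly the "permission denied" error seen in the log.
		fmt.Printf("%s is not executable (mode %v); adding +x\n", path, info.Mode().Perm())
		if err := os.Chmod(path, info.Mode().Perm()|0o111); err != nil {
			fmt.Fprintf(os.Stderr, "chmod: %v\n", err)
			os.Exit(1)
		}
		return
	}
	fmt.Printf("%s is executable (mode %v)\n", path, info.Mode().Perm())
}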

TestStoppedBinaryUpgrade/MinikubeLogs (0.16s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-338000
version_upgrade_test.go:219: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p stopped-upgrade-338000: exit status 85 (157.132ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-384000                         | enable-default-cni-384000 | jenkins | v1.32.0 | 01 Dec 23 10:16 PST |                     |
	|         | sudo journalctl -xeu kubelet                         |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-384000                         | enable-default-cni-384000 | jenkins | v1.32.0 | 01 Dec 23 10:16 PST |                     |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-384000                         | enable-default-cni-384000 | jenkins | v1.32.0 | 01 Dec 23 10:16 PST |                     |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-384000                         | enable-default-cni-384000 | jenkins | v1.32.0 | 01 Dec 23 10:16 PST |                     |
	|         | sudo systemctl status docker                         |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-384000                         | enable-default-cni-384000 | jenkins | v1.32.0 | 01 Dec 23 10:16 PST |                     |
	|         | sudo systemctl cat docker                            |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-384000                         | enable-default-cni-384000 | jenkins | v1.32.0 | 01 Dec 23 10:16 PST |                     |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-384000                         | enable-default-cni-384000 | jenkins | v1.32.0 | 01 Dec 23 10:16 PST |                     |
	|         | sudo docker system info                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-384000                         | enable-default-cni-384000 | jenkins | v1.32.0 | 01 Dec 23 10:16 PST |                     |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | cri-docker --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-384000                         | enable-default-cni-384000 | jenkins | v1.32.0 | 01 Dec 23 10:16 PST |                     |
	|         | sudo systemctl cat cri-docker                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-384000 sudo cat                | enable-default-cni-384000 | jenkins | v1.32.0 | 01 Dec 23 10:16 PST |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-384000 sudo cat                | enable-default-cni-384000 | jenkins | v1.32.0 | 01 Dec 23 10:16 PST |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-384000                         | enable-default-cni-384000 | jenkins | v1.32.0 | 01 Dec 23 10:16 PST |                     |
	|         | sudo cri-dockerd --version                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-384000                         | enable-default-cni-384000 | jenkins | v1.32.0 | 01 Dec 23 10:16 PST |                     |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | containerd --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-384000                         | enable-default-cni-384000 | jenkins | v1.32.0 | 01 Dec 23 10:16 PST |                     |
	|         | sudo systemctl cat containerd                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-384000 sudo cat                | enable-default-cni-384000 | jenkins | v1.32.0 | 01 Dec 23 10:16 PST |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-384000                         | enable-default-cni-384000 | jenkins | v1.32.0 | 01 Dec 23 10:16 PST |                     |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-384000                         | enable-default-cni-384000 | jenkins | v1.32.0 | 01 Dec 23 10:16 PST |                     |
	|         | sudo containerd config dump                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-384000                         | enable-default-cni-384000 | jenkins | v1.32.0 | 01 Dec 23 10:16 PST |                     |
	|         | sudo systemctl status crio                           |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-384000                         | enable-default-cni-384000 | jenkins | v1.32.0 | 01 Dec 23 10:16 PST |                     |
	|         | sudo systemctl cat crio                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-384000                         | enable-default-cni-384000 | jenkins | v1.32.0 | 01 Dec 23 10:16 PST |                     |
	|         | sudo find /etc/crio -type f                          |                           |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                        |                           |         |         |                     |                     |
	|         | \;                                                   |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-384000                         | enable-default-cni-384000 | jenkins | v1.32.0 | 01 Dec 23 10:16 PST |                     |
	|         | sudo crio config                                     |                           |         |         |                     |                     |
	| delete  | -p enable-default-cni-384000                         | enable-default-cni-384000 | jenkins | v1.32.0 | 01 Dec 23 10:16 PST | 01 Dec 23 10:16 PST |
	| start   | -p flannel-384000                                    | flannel-384000            | jenkins | v1.32.0 | 01 Dec 23 10:16 PST |                     |
	|         | --memory=3072                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --cni=flannel --driver=qemu2                         |                           |         |         |                     |                     |
	| ssh     | -p flannel-384000 sudo cat                           | flannel-384000            | jenkins | v1.32.0 | 01 Dec 23 10:16 PST |                     |
	|         | /etc/nsswitch.conf                                   |                           |         |         |                     |                     |
	| ssh     | -p flannel-384000 sudo cat                           | flannel-384000            | jenkins | v1.32.0 | 01 Dec 23 10:16 PST |                     |
	|         | /etc/hosts                                           |                           |         |         |                     |                     |
	| ssh     | -p flannel-384000 sudo cat                           | flannel-384000            | jenkins | v1.32.0 | 01 Dec 23 10:16 PST |                     |
	|         | /etc/resolv.conf                                     |                           |         |         |                     |                     |
	| ssh     | -p flannel-384000 sudo crictl                        | flannel-384000            | jenkins | v1.32.0 | 01 Dec 23 10:16 PST |                     |
	|         | pods                                                 |                           |         |         |                     |                     |
	| ssh     | -p flannel-384000 sudo crictl                        | flannel-384000            | jenkins | v1.32.0 | 01 Dec 23 10:16 PST |                     |
	|         | ps --all                                             |                           |         |         |                     |                     |
	| ssh     | -p flannel-384000 sudo find                          | flannel-384000            | jenkins | v1.32.0 | 01 Dec 23 10:16 PST |                     |
	|         | /etc/cni -type f -exec sh -c                         |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p flannel-384000 sudo ip a s                        | flannel-384000            | jenkins | v1.32.0 | 01 Dec 23 10:16 PST |                     |
	| ssh     | -p flannel-384000 sudo ip r s                        | flannel-384000            | jenkins | v1.32.0 | 01 Dec 23 10:16 PST |                     |
	| ssh     | -p flannel-384000 sudo                               | flannel-384000            | jenkins | v1.32.0 | 01 Dec 23 10:16 PST |                     |
	|         | iptables-save                                        |                           |         |         |                     |                     |
	| ssh     | -p flannel-384000 sudo                               | flannel-384000            | jenkins | v1.32.0 | 01 Dec 23 10:16 PST |                     |
	|         | iptables -t nat -L -n -v                             |                           |         |         |                     |                     |
	| ssh     | -p flannel-384000 sudo cat                           | flannel-384000            | jenkins | v1.32.0 | 01 Dec 23 10:16 PST |                     |
	|         | /run/flannel/subnet.env                              |                           |         |         |                     |                     |
	| ssh     | -p flannel-384000 sudo cat                           | flannel-384000            | jenkins | v1.32.0 | 01 Dec 23 10:16 PST |                     |
	|         | /etc/kube-flannel/cni-conf.json                      |                           |         |         |                     |                     |
	| ssh     | -p flannel-384000 sudo                               | flannel-384000            | jenkins | v1.32.0 | 01 Dec 23 10:16 PST |                     |
	|         | systemctl status kubelet --all                       |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p flannel-384000 sudo                               | flannel-384000            | jenkins | v1.32.0 | 01 Dec 23 10:16 PST |                     |
	|         | systemctl cat kubelet                                |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p flannel-384000 sudo                               | flannel-384000            | jenkins | v1.32.0 | 01 Dec 23 10:16 PST |                     |
	|         | journalctl -xeu kubelet --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p flannel-384000 sudo cat                           | flannel-384000            | jenkins | v1.32.0 | 01 Dec 23 10:16 PST |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p flannel-384000 sudo cat                           | flannel-384000            | jenkins | v1.32.0 | 01 Dec 23 10:16 PST |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p flannel-384000 sudo                               | flannel-384000            | jenkins | v1.32.0 | 01 Dec 23 10:16 PST |                     |
	|         | systemctl status docker --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p flannel-384000 sudo                               | flannel-384000            | jenkins | v1.32.0 | 01 Dec 23 10:16 PST |                     |
	|         | systemctl cat docker                                 |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p flannel-384000 sudo cat                           | flannel-384000            | jenkins | v1.32.0 | 01 Dec 23 10:16 PST |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p flannel-384000 sudo docker                        | flannel-384000            | jenkins | v1.32.0 | 01 Dec 23 10:16 PST |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p flannel-384000 sudo                               | flannel-384000            | jenkins | v1.32.0 | 01 Dec 23 10:16 PST |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p flannel-384000 sudo                               | flannel-384000            | jenkins | v1.32.0 | 01 Dec 23 10:16 PST |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p flannel-384000 sudo cat                           | flannel-384000            | jenkins | v1.32.0 | 01 Dec 23 10:16 PST |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p flannel-384000 sudo cat                           | flannel-384000            | jenkins | v1.32.0 | 01 Dec 23 10:16 PST |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p flannel-384000 sudo                               | flannel-384000            | jenkins | v1.32.0 | 01 Dec 23 10:16 PST |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p flannel-384000 sudo                               | flannel-384000            | jenkins | v1.32.0 | 01 Dec 23 10:16 PST |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p flannel-384000 sudo                               | flannel-384000            | jenkins | v1.32.0 | 01 Dec 23 10:16 PST |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p flannel-384000 sudo cat                           | flannel-384000            | jenkins | v1.32.0 | 01 Dec 23 10:16 PST |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p flannel-384000 sudo cat                           | flannel-384000            | jenkins | v1.32.0 | 01 Dec 23 10:16 PST |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p flannel-384000 sudo                               | flannel-384000            | jenkins | v1.32.0 | 01 Dec 23 10:16 PST |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p flannel-384000 sudo                               | flannel-384000            | jenkins | v1.32.0 | 01 Dec 23 10:16 PST |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p flannel-384000 sudo                               | flannel-384000            | jenkins | v1.32.0 | 01 Dec 23 10:16 PST |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p flannel-384000 sudo find                          | flannel-384000            | jenkins | v1.32.0 | 01 Dec 23 10:16 PST |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p flannel-384000 sudo crio                          | flannel-384000            | jenkins | v1.32.0 | 01 Dec 23 10:16 PST |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p flannel-384000                                    | flannel-384000            | jenkins | v1.32.0 | 01 Dec 23 10:16 PST | 01 Dec 23 10:16 PST |
	| start   | -p bridge-384000 --memory=3072                       | bridge-384000             | jenkins | v1.32.0 | 01 Dec 23 10:16 PST |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --cni=bridge --driver=qemu2                          |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/01 10:16:52
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.21.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1201 10:16:52.708192    8328 out.go:296] Setting OutFile to fd 1 ...
	I1201 10:16:52.708321    8328 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:16:52.708325    8328 out.go:309] Setting ErrFile to fd 2...
	I1201 10:16:52.708327    8328 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:16:52.708459    8328 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17703-5375/.minikube/bin
	I1201 10:16:52.709601    8328 out.go:303] Setting JSON to false
	I1201 10:16:52.725577    8328 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2786,"bootTime":1701451826,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1201 10:16:52.725669    8328 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1201 10:16:52.731146    8328 out.go:177] * [bridge-384000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1201 10:16:52.744043    8328 out.go:177]   - MINIKUBE_LOCATION=17703
	I1201 10:16:52.739248    8328 notify.go:220] Checking for updates...
	I1201 10:16:52.751071    8328 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	I1201 10:16:52.759089    8328 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1201 10:16:52.763002    8328 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 10:16:52.766038    8328 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	I1201 10:16:52.773040    8328 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 10:16:52.777444    8328 config.go:182] Loaded profile config "multinode-486000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1201 10:16:52.777495    8328 driver.go:392] Setting default libvirt URI to qemu:///system
	I1201 10:16:52.780978    8328 out.go:177] * Using the qemu2 driver based on user configuration
	I1201 10:16:52.788039    8328 start.go:298] selected driver: qemu2
	I1201 10:16:52.788044    8328 start.go:902] validating driver "qemu2" against <nil>
	I1201 10:16:52.788050    8328 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 10:16:52.790617    8328 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1201 10:16:52.794051    8328 out.go:177] * Automatically selected the socket_vmnet network
	I1201 10:16:52.798167    8328 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 10:16:52.798216    8328 cni.go:84] Creating CNI manager for "bridge"
	I1201 10:16:52.798222    8328 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1201 10:16:52.798227    8328 start_flags.go:323] config:
	{Name:bridge-384000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:bridge-384000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHA
gentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1201 10:16:52.803081    8328 iso.go:125] acquiring lock: {Name:mk3aa781e939aa763c73ad5fb916ad9c0c6cf746 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 10:16:52.811087    8328 out.go:177] * Starting control plane node bridge-384000 in cluster bridge-384000
	I1201 10:16:52.814243    8328 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1201 10:16:52.814274    8328 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I1201 10:16:52.814287    8328 cache.go:56] Caching tarball of preloaded images
	I1201 10:16:52.814355    8328 preload.go:174] Found /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1201 10:16:52.814362    8328 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1201 10:16:52.814453    8328 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17703-5375/.minikube/profiles/bridge-384000/config.json ...
	I1201 10:16:52.814466    8328 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17703-5375/.minikube/profiles/bridge-384000/config.json: {Name:mkc202b9855d588672ccd2a35a02a4d970e678dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 10:16:52.814700    8328 start.go:365] acquiring machines lock for bridge-384000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:16:52.814738    8328 start.go:369] acquired machines lock for "bridge-384000" in 29.584µs
	I1201 10:16:52.814751    8328 start.go:93] Provisioning new machine with config: &{Name:bridge-384000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.4 ClusterName:bridge-384000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1201 10:16:52.814783    8328 start.go:125] createHost starting for "" (driver="qemu2")
	I1201 10:16:52.825944    8328 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1201 10:16:52.845198    8328 start.go:159] libmachine.API.Create for "bridge-384000" (driver="qemu2")
	I1201 10:16:52.845231    8328 client.go:168] LocalClient.Create starting
	I1201 10:16:52.845296    8328 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/ca.pem
	I1201 10:16:52.845326    8328 main.go:141] libmachine: Decoding PEM data...
	I1201 10:16:52.845340    8328 main.go:141] libmachine: Parsing certificate...
	I1201 10:16:52.845380    8328 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/cert.pem
	I1201 10:16:52.845410    8328 main.go:141] libmachine: Decoding PEM data...
	I1201 10:16:52.845418    8328 main.go:141] libmachine: Parsing certificate...
	I1201 10:16:52.845805    8328 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17703-5375/.minikube/cache/iso/arm64/minikube-v1.32.1-1701387192-17703-arm64.iso...
	I1201 10:16:52.976661    8328 main.go:141] libmachine: Creating SSH key...
	I1201 10:16:53.406828    8328 main.go:141] libmachine: Creating Disk image...
	I1201 10:16:53.406846    8328 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1201 10:16:53.407076    8328 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/bridge-384000/disk.qcow2.raw /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/bridge-384000/disk.qcow2
	I1201 10:16:53.419781    8328 main.go:141] libmachine: STDOUT: 
	I1201 10:16:53.419800    8328 main.go:141] libmachine: STDERR: 
	I1201 10:16:53.419862    8328 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/bridge-384000/disk.qcow2 +20000M
	I1201 10:16:53.430440    8328 main.go:141] libmachine: STDOUT: Image resized.
	
	I1201 10:16:53.430461    8328 main.go:141] libmachine: STDERR: 
	I1201 10:16:53.430480    8328 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/bridge-384000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/bridge-384000/disk.qcow2
	I1201 10:16:53.430486    8328 main.go:141] libmachine: Starting QEMU VM...
	I1201 10:16:53.430521    8328 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/bridge-384000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/bridge-384000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/bridge-384000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:32:80:f0:35:71 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/bridge-384000/disk.qcow2
	I1201 10:16:53.432229    8328 main.go:141] libmachine: STDOUT: 
	I1201 10:16:53.432245    8328 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:16:53.432267    8328 client.go:171] LocalClient.Create took 587.04375ms
	I1201 10:16:55.434397    8328 start.go:128] duration metric: createHost completed in 2.619654583s
	I1201 10:16:55.434446    8328 start.go:83] releasing machines lock for "bridge-384000", held for 2.619758167s
	W1201 10:16:55.434519    8328 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:16:55.453764    8328 out.go:177] * Deleting "bridge-384000" in qemu2 ...
	
	* 
	* Profile "stopped-upgrade-338000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p stopped-upgrade-338000"

-- /stdout --
version_upgrade_test.go:221: `minikube logs` after upgrade to HEAD from v1.6.2 failed: exit status 85
--- FAIL: TestStoppedBinaryUpgrade/MinikubeLogs (0.16s)
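The stdout captured above ends with minikube reporting that the "stopped-upgrade-338000" profile does not exist, which suggests the profile was already gone by the time `minikube logs` was invoked (presumably because the earlier upgrade/start steps in this environment also fail to create a VM); exit status 85 is the result. A quick manual check on the agent, using only commands that already appear in this report, would be:

	out/minikube-darwin-arm64 profile list
	out/minikube-darwin-arm64 logs -p stopped-upgrade-338000 --file=logs.txt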

TestNetworkPlugins/group/kubenet/Start (9.95s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-384000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-384000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.950878792s)

-- stdout --
	* [kubenet-384000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17703
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kubenet-384000 in cluster kubenet-384000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-384000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1201 10:16:57.715310    8357 out.go:296] Setting OutFile to fd 1 ...
	I1201 10:16:57.715445    8357 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:16:57.715448    8357 out.go:309] Setting ErrFile to fd 2...
	I1201 10:16:57.715451    8357 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:16:57.715566    8357 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17703-5375/.minikube/bin
	I1201 10:16:57.716565    8357 out.go:303] Setting JSON to false
	I1201 10:16:57.732447    8357 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2791,"bootTime":1701451826,"procs":450,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1201 10:16:57.732512    8357 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1201 10:16:57.736012    8357 out.go:177] * [kubenet-384000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1201 10:16:57.747007    8357 out.go:177]   - MINIKUBE_LOCATION=17703
	I1201 10:16:57.742705    8357 notify.go:220] Checking for updates...
	I1201 10:16:57.754892    8357 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	I1201 10:16:57.763016    8357 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1201 10:16:57.766943    8357 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 10:16:57.774812    8357 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	I1201 10:16:57.782959    8357 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 10:16:57.784690    8357 config.go:182] Loaded profile config "bridge-384000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1201 10:16:57.784759    8357 config.go:182] Loaded profile config "multinode-486000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1201 10:16:57.784802    8357 driver.go:392] Setting default libvirt URI to qemu:///system
	I1201 10:16:57.789046    8357 out.go:177] * Using the qemu2 driver based on user configuration
	I1201 10:16:57.794959    8357 start.go:298] selected driver: qemu2
	I1201 10:16:57.794965    8357 start.go:902] validating driver "qemu2" against <nil>
	I1201 10:16:57.794972    8357 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 10:16:57.797629    8357 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1201 10:16:57.800978    8357 out.go:177] * Automatically selected the socket_vmnet network
	I1201 10:16:57.804072    8357 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 10:16:57.804106    8357 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1201 10:16:57.804111    8357 start_flags.go:323] config:
	{Name:kubenet-384000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kubenet-384000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHA
gentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1201 10:16:57.808980    8357 iso.go:125] acquiring lock: {Name:mk3aa781e939aa763c73ad5fb916ad9c0c6cf746 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 10:16:57.817014    8357 out.go:177] * Starting control plane node kubenet-384000 in cluster kubenet-384000
	I1201 10:16:57.819905    8357 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1201 10:16:57.819937    8357 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I1201 10:16:57.819947    8357 cache.go:56] Caching tarball of preloaded images
	I1201 10:16:57.820026    8357 preload.go:174] Found /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1201 10:16:57.820034    8357 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1201 10:16:57.820132    8357 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17703-5375/.minikube/profiles/kubenet-384000/config.json ...
	I1201 10:16:57.820146    8357 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17703-5375/.minikube/profiles/kubenet-384000/config.json: {Name:mk4e6e61b9543ee5beaad3ef847daeb65c03db27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 10:16:57.820439    8357 start.go:365] acquiring machines lock for kubenet-384000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:16:57.820476    8357 start.go:369] acquired machines lock for "kubenet-384000" in 30.25µs
	I1201 10:16:57.820492    8357 start.go:93] Provisioning new machine with config: &{Name:kubenet-384000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.4 ClusterName:kubenet-384000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1201 10:16:57.820541    8357 start.go:125] createHost starting for "" (driver="qemu2")
	I1201 10:16:57.827840    8357 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1201 10:16:57.846405    8357 start.go:159] libmachine.API.Create for "kubenet-384000" (driver="qemu2")
	I1201 10:16:57.846428    8357 client.go:168] LocalClient.Create starting
	I1201 10:16:57.846501    8357 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/ca.pem
	I1201 10:16:57.846536    8357 main.go:141] libmachine: Decoding PEM data...
	I1201 10:16:57.846547    8357 main.go:141] libmachine: Parsing certificate...
	I1201 10:16:57.846596    8357 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/cert.pem
	I1201 10:16:57.846620    8357 main.go:141] libmachine: Decoding PEM data...
	I1201 10:16:57.846630    8357 main.go:141] libmachine: Parsing certificate...
	I1201 10:16:57.846993    8357 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17703-5375/.minikube/cache/iso/arm64/minikube-v1.32.1-1701387192-17703-arm64.iso...
	I1201 10:16:57.977075    8357 main.go:141] libmachine: Creating SSH key...
	I1201 10:16:58.137841    8357 main.go:141] libmachine: Creating Disk image...
	I1201 10:16:58.137847    8357 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1201 10:16:58.138036    8357 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/kubenet-384000/disk.qcow2.raw /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/kubenet-384000/disk.qcow2
	I1201 10:16:58.150757    8357 main.go:141] libmachine: STDOUT: 
	I1201 10:16:58.150779    8357 main.go:141] libmachine: STDERR: 
	I1201 10:16:58.150838    8357 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/kubenet-384000/disk.qcow2 +20000M
	I1201 10:16:58.161363    8357 main.go:141] libmachine: STDOUT: Image resized.
	
	I1201 10:16:58.161397    8357 main.go:141] libmachine: STDERR: 
	I1201 10:16:58.161414    8357 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/kubenet-384000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/kubenet-384000/disk.qcow2
	I1201 10:16:58.161419    8357 main.go:141] libmachine: Starting QEMU VM...
	I1201 10:16:58.161452    8357 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/kubenet-384000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/kubenet-384000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/kubenet-384000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:37:e8:ca:e4:e3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/kubenet-384000/disk.qcow2
	I1201 10:16:58.163115    8357 main.go:141] libmachine: STDOUT: 
	I1201 10:16:58.163131    8357 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:16:58.163150    8357 client.go:171] LocalClient.Create took 316.721625ms
	I1201 10:17:00.165326    8357 start.go:128] duration metric: createHost completed in 2.344812292s
	I1201 10:17:00.165426    8357 start.go:83] releasing machines lock for "kubenet-384000", held for 2.344995458s
	W1201 10:17:00.165489    8357 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:17:00.178744    8357 out.go:177] * Deleting "kubenet-384000" in qemu2 ...
	W1201 10:17:00.207012    8357 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:17:00.207065    8357 start.go:709] Will try again in 5 seconds ...
	I1201 10:17:05.209057    8357 start.go:365] acquiring machines lock for kubenet-384000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:17:05.209162    8357 start.go:369] acquired machines lock for "kubenet-384000" in 68.333µs
	I1201 10:17:05.209181    8357 start.go:93] Provisioning new machine with config: &{Name:kubenet-384000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.4 ClusterName:kubenet-384000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1201 10:17:05.209228    8357 start.go:125] createHost starting for "" (driver="qemu2")
	I1201 10:17:05.218062    8357 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1201 10:17:05.232656    8357 start.go:159] libmachine.API.Create for "kubenet-384000" (driver="qemu2")
	I1201 10:17:05.232686    8357 client.go:168] LocalClient.Create starting
	I1201 10:17:05.232763    8357 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/ca.pem
	I1201 10:17:05.232791    8357 main.go:141] libmachine: Decoding PEM data...
	I1201 10:17:05.232801    8357 main.go:141] libmachine: Parsing certificate...
	I1201 10:17:05.232835    8357 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/cert.pem
	I1201 10:17:05.232856    8357 main.go:141] libmachine: Decoding PEM data...
	I1201 10:17:05.232862    8357 main.go:141] libmachine: Parsing certificate...
	I1201 10:17:05.233132    8357 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17703-5375/.minikube/cache/iso/arm64/minikube-v1.32.1-1701387192-17703-arm64.iso...
	I1201 10:17:05.459191    8357 main.go:141] libmachine: Creating SSH key...
	I1201 10:17:05.555313    8357 main.go:141] libmachine: Creating Disk image...
	I1201 10:17:05.555323    8357 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1201 10:17:05.555496    8357 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/kubenet-384000/disk.qcow2.raw /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/kubenet-384000/disk.qcow2
	I1201 10:17:05.567715    8357 main.go:141] libmachine: STDOUT: 
	I1201 10:17:05.567732    8357 main.go:141] libmachine: STDERR: 
	I1201 10:17:05.567812    8357 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/kubenet-384000/disk.qcow2 +20000M
	I1201 10:17:05.578228    8357 main.go:141] libmachine: STDOUT: Image resized.
	
	I1201 10:17:05.578245    8357 main.go:141] libmachine: STDERR: 
	I1201 10:17:05.578262    8357 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/kubenet-384000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/kubenet-384000/disk.qcow2
	I1201 10:17:05.578268    8357 main.go:141] libmachine: Starting QEMU VM...
	I1201 10:17:05.578321    8357 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/kubenet-384000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/kubenet-384000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/kubenet-384000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:82:5a:a7:b4:69 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/kubenet-384000/disk.qcow2
	I1201 10:17:05.580092    8357 main.go:141] libmachine: STDOUT: 
	I1201 10:17:05.580121    8357 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:17:05.580134    8357 client.go:171] LocalClient.Create took 347.450958ms
	I1201 10:17:07.582263    8357 start.go:128] duration metric: createHost completed in 2.373069625s
	I1201 10:17:07.582335    8357 start.go:83] releasing machines lock for "kubenet-384000", held for 2.373219541s
	W1201 10:17:07.582712    8357 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-384000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-384000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:17:07.608525    8357 out.go:177] 
	W1201 10:17:07.611556    8357 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1201 10:17:07.611599    8357 out.go:239] * 
	* 
	W1201 10:17:07.613505    8357 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 10:17:07.624308    8357 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.95s)
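Every start attempt in this group fails at the same point: `socket_vmnet_client` cannot reach the host-side daemon socket at /var/run/socket_vmnet ("Connection refused"), so the qemu2 VM is never created and minikube exits with status 80. A minimal sketch for checking the daemon on the build host follows; the paths are taken from the log above, while the Homebrew service name is an assumption that this report does not confirm:

	# Does the daemon socket exist, and is a socket_vmnet process running?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# If socket_vmnet was installed via Homebrew, restarting it would typically be:
	#   sudo brew services restart socket_vmnet
	# Re-run one failing start to confirm, e.g.:
	out/minikube-darwin-arm64 start -p kubenet-384000 --memory=3072 --network-plugin=kubenet --driver=qemu2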

TestStartStop/group/old-k8s-version/serial/FirstStart (12.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-277000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-277000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0: exit status 80 (12.107302s)

-- stdout --
	* [old-k8s-version-277000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17703
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node old-k8s-version-277000 in cluster old-k8s-version-277000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-277000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1201 10:17:05.210112    8470 out.go:296] Setting OutFile to fd 1 ...
	I1201 10:17:05.214487    8470 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:17:05.214491    8470 out.go:309] Setting ErrFile to fd 2...
	I1201 10:17:05.214494    8470 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:17:05.214633    8470 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17703-5375/.minikube/bin
	I1201 10:17:05.218267    8470 out.go:303] Setting JSON to false
	I1201 10:17:05.235417    8470 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2799,"bootTime":1701451826,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1201 10:17:05.235494    8470 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1201 10:17:05.241171    8470 out.go:177] * [old-k8s-version-277000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1201 10:17:05.254950    8470 out.go:177]   - MINIKUBE_LOCATION=17703
	I1201 10:17:05.251226    8470 notify.go:220] Checking for updates...
	I1201 10:17:05.263108    8470 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	I1201 10:17:05.270063    8470 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1201 10:17:05.278063    8470 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 10:17:05.284673    8470 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	I1201 10:17:05.295053    8470 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 10:17:05.299561    8470 config.go:182] Loaded profile config "kubenet-384000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1201 10:17:05.299635    8470 config.go:182] Loaded profile config "multinode-486000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1201 10:17:05.299682    8470 driver.go:392] Setting default libvirt URI to qemu:///system
	I1201 10:17:05.304043    8470 out.go:177] * Using the qemu2 driver based on user configuration
	I1201 10:17:05.311977    8470 start.go:298] selected driver: qemu2
	I1201 10:17:05.311982    8470 start.go:902] validating driver "qemu2" against <nil>
	I1201 10:17:05.311989    8470 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 10:17:05.314932    8470 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1201 10:17:05.318125    8470 out.go:177] * Automatically selected the socket_vmnet network
	I1201 10:17:05.322184    8470 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 10:17:05.322273    8470 cni.go:84] Creating CNI manager for ""
	I1201 10:17:05.322283    8470 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1201 10:17:05.322289    8470 start_flags.go:323] config:
	{Name:old-k8s-version-277000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-277000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSoc
k: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1201 10:17:05.327767    8470 iso.go:125] acquiring lock: {Name:mk3aa781e939aa763c73ad5fb916ad9c0c6cf746 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 10:17:05.337103    8470 out.go:177] * Starting control plane node old-k8s-version-277000 in cluster old-k8s-version-277000
	I1201 10:17:05.341104    8470 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1201 10:17:05.341144    8470 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I1201 10:17:05.341155    8470 cache.go:56] Caching tarball of preloaded images
	I1201 10:17:05.341268    8470 preload.go:174] Found /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1201 10:17:05.341275    8470 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1201 10:17:05.341365    8470 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17703-5375/.minikube/profiles/old-k8s-version-277000/config.json ...
	I1201 10:17:05.341387    8470 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17703-5375/.minikube/profiles/old-k8s-version-277000/config.json: {Name:mk01b68c66e7cf68bb35387b01b1832cd6f15902 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 10:17:05.341874    8470 start.go:365] acquiring machines lock for old-k8s-version-277000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:17:07.582467    8470 start.go:369] acquired machines lock for "old-k8s-version-277000" in 2.240615125s
	I1201 10:17:07.582682    8470 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-277000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-277000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1201 10:17:07.583042    8470 start.go:125] createHost starting for "" (driver="qemu2")
	I1201 10:17:07.608525    8470 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1201 10:17:07.657379    8470 start.go:159] libmachine.API.Create for "old-k8s-version-277000" (driver="qemu2")
	I1201 10:17:07.657436    8470 client.go:168] LocalClient.Create starting
	I1201 10:17:07.657542    8470 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/ca.pem
	I1201 10:17:07.657579    8470 main.go:141] libmachine: Decoding PEM data...
	I1201 10:17:07.657599    8470 main.go:141] libmachine: Parsing certificate...
	I1201 10:17:07.657679    8470 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/cert.pem
	I1201 10:17:07.657709    8470 main.go:141] libmachine: Decoding PEM data...
	I1201 10:17:07.657722    8470 main.go:141] libmachine: Parsing certificate...
	I1201 10:17:07.658341    8470 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17703-5375/.minikube/cache/iso/arm64/minikube-v1.32.1-1701387192-17703-arm64.iso...
	I1201 10:17:07.805858    8470 main.go:141] libmachine: Creating SSH key...
	I1201 10:17:07.882566    8470 main.go:141] libmachine: Creating Disk image...
	I1201 10:17:07.882575    8470 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1201 10:17:07.882792    8470 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/old-k8s-version-277000/disk.qcow2.raw /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/old-k8s-version-277000/disk.qcow2
	I1201 10:17:07.895542    8470 main.go:141] libmachine: STDOUT: 
	I1201 10:17:07.895565    8470 main.go:141] libmachine: STDERR: 
	I1201 10:17:07.895645    8470 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/old-k8s-version-277000/disk.qcow2 +20000M
	I1201 10:17:07.907737    8470 main.go:141] libmachine: STDOUT: Image resized.
	
	I1201 10:17:07.907764    8470 main.go:141] libmachine: STDERR: 
	I1201 10:17:07.907808    8470 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/old-k8s-version-277000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/old-k8s-version-277000/disk.qcow2
	I1201 10:17:07.907815    8470 main.go:141] libmachine: Starting QEMU VM...
	I1201 10:17:07.907853    8470 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/old-k8s-version-277000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/old-k8s-version-277000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/old-k8s-version-277000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:01:30:db:21:64 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/old-k8s-version-277000/disk.qcow2
	I1201 10:17:07.909925    8470 main.go:141] libmachine: STDOUT: 
	I1201 10:17:07.909942    8470 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:17:07.909964    8470 client.go:171] LocalClient.Create took 252.52575ms
	I1201 10:17:09.910866    8470 start.go:128] duration metric: createHost completed in 2.327853208s
	I1201 10:17:09.910882    8470 start.go:83] releasing machines lock for "old-k8s-version-277000", held for 2.328439459s
	W1201 10:17:09.910896    8470 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:17:09.919167    8470 out.go:177] * Deleting "old-k8s-version-277000" in qemu2 ...
	W1201 10:17:09.933794    8470 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:17:09.933802    8470 start.go:709] Will try again in 5 seconds ...
	I1201 10:17:14.934026    8470 start.go:365] acquiring machines lock for old-k8s-version-277000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:17:14.934339    8470 start.go:369] acquired machines lock for "old-k8s-version-277000" in 234.416µs
	I1201 10:17:14.934437    8470 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-277000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-277000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1201 10:17:14.934611    8470 start.go:125] createHost starting for "" (driver="qemu2")
	I1201 10:17:14.952337    8470 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1201 10:17:14.998904    8470 start.go:159] libmachine.API.Create for "old-k8s-version-277000" (driver="qemu2")
	I1201 10:17:14.998969    8470 client.go:168] LocalClient.Create starting
	I1201 10:17:14.999090    8470 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/ca.pem
	I1201 10:17:14.999161    8470 main.go:141] libmachine: Decoding PEM data...
	I1201 10:17:14.999180    8470 main.go:141] libmachine: Parsing certificate...
	I1201 10:17:14.999245    8470 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/cert.pem
	I1201 10:17:14.999294    8470 main.go:141] libmachine: Decoding PEM data...
	I1201 10:17:14.999308    8470 main.go:141] libmachine: Parsing certificate...
	I1201 10:17:15.001792    8470 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17703-5375/.minikube/cache/iso/arm64/minikube-v1.32.1-1701387192-17703-arm64.iso...
	I1201 10:17:15.155060    8470 main.go:141] libmachine: Creating SSH key...
	I1201 10:17:15.193795    8470 main.go:141] libmachine: Creating Disk image...
	I1201 10:17:15.193801    8470 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1201 10:17:15.193960    8470 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/old-k8s-version-277000/disk.qcow2.raw /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/old-k8s-version-277000/disk.qcow2
	I1201 10:17:15.209521    8470 main.go:141] libmachine: STDOUT: 
	I1201 10:17:15.209545    8470 main.go:141] libmachine: STDERR: 
	I1201 10:17:15.209615    8470 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/old-k8s-version-277000/disk.qcow2 +20000M
	I1201 10:17:15.225974    8470 main.go:141] libmachine: STDOUT: Image resized.
	
	I1201 10:17:15.225996    8470 main.go:141] libmachine: STDERR: 
	I1201 10:17:15.226012    8470 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/old-k8s-version-277000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/old-k8s-version-277000/disk.qcow2
	I1201 10:17:15.226020    8470 main.go:141] libmachine: Starting QEMU VM...
	I1201 10:17:15.226047    8470 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/old-k8s-version-277000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/old-k8s-version-277000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/old-k8s-version-277000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:c1:e6:8c:43:19 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/old-k8s-version-277000/disk.qcow2
	I1201 10:17:15.227824    8470 main.go:141] libmachine: STDOUT: 
	I1201 10:17:15.227839    8470 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:17:15.227853    8470 client.go:171] LocalClient.Create took 228.8825ms
	I1201 10:17:17.229976    8470 start.go:128] duration metric: createHost completed in 2.2953895s
	I1201 10:17:17.230035    8470 start.go:83] releasing machines lock for "old-k8s-version-277000", held for 2.295727042s
	W1201 10:17:17.230420    8470 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-277000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-277000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:17:17.251044    8470 out.go:177] 
	W1201 10:17:17.256062    8470 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1201 10:17:17.256096    8470 out.go:239] * 
	* 
	W1201 10:17:17.258514    8470 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 10:17:17.270957    8470 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-277000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-277000 -n old-k8s-version-277000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-277000 -n old-k8s-version-277000: exit status 7 (66.535875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-277000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (12.18s)
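
Every start attempt in this group fails at the same step: socket_vmnet_client cannot connect to /var/run/socket_vmnet ("Connection refused"), so the VM is never created and the rest of the serial tests cannot run. That points at the host-side socket_vmnet helper on the agent rather than at minikube or the Kubernetes version under test. A minimal pre-flight check sketch for the CI host, assuming socket_vmnet is installed at the paths shown in the log above (pgrep, ls and nc as shipped with macOS):

	pgrep -fl socket_vmnet                     # is the socket_vmnet daemon running?
	ls -l /var/run/socket_vmnet                # does the unix socket exist?
	nc -U /var/run/socket_vmnet < /dev/null    # a "Connection refused" here reproduces the failure

If the socket is missing or refuses connections, restarting the socket_vmnet service on the agent (per the project's own install documentation) is the first thing to try before re-running the suite.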

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (9.94s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-322000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-322000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.1: exit status 80 (9.873796792s)

                                                
                                                
-- stdout --
	* [no-preload-322000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17703
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node no-preload-322000 in cluster no-preload-322000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-322000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 10:17:09.995662    8580 out.go:296] Setting OutFile to fd 1 ...
	I1201 10:17:09.995805    8580 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:17:09.995808    8580 out.go:309] Setting ErrFile to fd 2...
	I1201 10:17:09.995811    8580 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:17:09.995945    8580 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17703-5375/.minikube/bin
	I1201 10:17:09.997052    8580 out.go:303] Setting JSON to false
	I1201 10:17:10.013078    8580 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2804,"bootTime":1701451826,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1201 10:17:10.013180    8580 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1201 10:17:10.018199    8580 out.go:177] * [no-preload-322000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1201 10:17:10.030153    8580 out.go:177]   - MINIKUBE_LOCATION=17703
	I1201 10:17:10.026201    8580 notify.go:220] Checking for updates...
	I1201 10:17:10.037151    8580 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	I1201 10:17:10.044105    8580 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1201 10:17:10.051929    8580 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 10:17:10.056077    8580 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	I1201 10:17:10.071030    8580 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 10:17:10.075509    8580 config.go:182] Loaded profile config "multinode-486000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1201 10:17:10.075592    8580 config.go:182] Loaded profile config "old-k8s-version-277000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I1201 10:17:10.075637    8580 driver.go:392] Setting default libvirt URI to qemu:///system
	I1201 10:17:10.080187    8580 out.go:177] * Using the qemu2 driver based on user configuration
	I1201 10:17:10.087088    8580 start.go:298] selected driver: qemu2
	I1201 10:17:10.087093    8580 start.go:902] validating driver "qemu2" against <nil>
	I1201 10:17:10.087099    8580 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 10:17:10.089631    8580 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1201 10:17:10.093100    8580 out.go:177] * Automatically selected the socket_vmnet network
	I1201 10:17:10.097077    8580 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 10:17:10.097125    8580 cni.go:84] Creating CNI manager for ""
	I1201 10:17:10.097133    8580 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1201 10:17:10.097138    8580 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1201 10:17:10.097146    8580 start_flags.go:323] config:
	{Name:no-preload-322000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.1 ClusterName:no-preload-322000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1201 10:17:10.102225    8580 iso.go:125] acquiring lock: {Name:mk3aa781e939aa763c73ad5fb916ad9c0c6cf746 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 10:17:10.108114    8580 out.go:177] * Starting control plane node no-preload-322000 in cluster no-preload-322000
	I1201 10:17:10.111099    8580 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime docker
	I1201 10:17:10.111180    8580 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17703-5375/.minikube/profiles/no-preload-322000/config.json ...
	I1201 10:17:10.111220    8580 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17703-5375/.minikube/profiles/no-preload-322000/config.json: {Name:mk7dcea37af19e7ca262484d3532172d39553e6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 10:17:10.111210    8580 cache.go:107] acquiring lock: {Name:mk77e14012bc5feecacbad696729eadaa024d606 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 10:17:10.111252    8580 cache.go:107] acquiring lock: {Name:mk6ec5c8e4edabdfc3c09d4bd55c80a05919027e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 10:17:10.111262    8580 cache.go:107] acquiring lock: {Name:mk7b2713d5a7627eed464aad8e153292eed05898 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 10:17:10.111321    8580 cache.go:115] /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1201 10:17:10.111329    8580 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 122.584µs
	I1201 10:17:10.111338    8580 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1201 10:17:10.111346    8580 cache.go:107] acquiring lock: {Name:mk96db6c3cda4de37492a4e75bc221be04111cf5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 10:17:10.111228    8580 cache.go:107] acquiring lock: {Name:mkbd589e9428d368ad6d0ef8873b50e60c3e0495 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 10:17:10.111390    8580 cache.go:107] acquiring lock: {Name:mka501a835474b1504eb101a162349567a33bc80 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 10:17:10.111388    8580 cache.go:107] acquiring lock: {Name:mk482905ad960d096e1b713a32e05ed32c6d3445 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 10:17:10.111520    8580 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1201 10:17:10.111592    8580 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1201 10:17:10.111612    8580 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I1201 10:17:10.111672    8580 start.go:365] acquiring machines lock for no-preload-322000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:17:10.111704    8580 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1201 10:17:10.111704    8580 cache.go:107] acquiring lock: {Name:mk528e8c6542c8253276cdca2139f40214aa1f35 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 10:17:10.111747    8580 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I1201 10:17:10.111780    8580 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1201 10:17:10.111785    8580 start.go:369] acquired machines lock for "no-preload-322000" in 97.166µs
	I1201 10:17:10.111818    8580 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1201 10:17:10.111812    8580 start.go:93] Provisioning new machine with config: &{Name:no-preload-322000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.1 ClusterName:no-preload-322000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1201 10:17:10.111860    8580 start.go:125] createHost starting for "" (driver="qemu2")
	I1201 10:17:10.120116    8580 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1201 10:17:10.124333    8580 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1201 10:17:10.124489    8580 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1201 10:17:10.127858    8580 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I1201 10:17:10.127914    8580 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I1201 10:17:10.127943    8580 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1201 10:17:10.127997    8580 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1201 10:17:10.128049    8580 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1201 10:17:10.139566    8580 start.go:159] libmachine.API.Create for "no-preload-322000" (driver="qemu2")
	I1201 10:17:10.139589    8580 client.go:168] LocalClient.Create starting
	I1201 10:17:10.139688    8580 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/ca.pem
	I1201 10:17:10.139721    8580 main.go:141] libmachine: Decoding PEM data...
	I1201 10:17:10.139730    8580 main.go:141] libmachine: Parsing certificate...
	I1201 10:17:10.139768    8580 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/cert.pem
	I1201 10:17:10.139793    8580 main.go:141] libmachine: Decoding PEM data...
	I1201 10:17:10.139801    8580 main.go:141] libmachine: Parsing certificate...
	I1201 10:17:10.140190    8580 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17703-5375/.minikube/cache/iso/arm64/minikube-v1.32.1-1701387192-17703-arm64.iso...
	I1201 10:17:10.298908    8580 main.go:141] libmachine: Creating SSH key...
	I1201 10:17:10.353144    8580 main.go:141] libmachine: Creating Disk image...
	I1201 10:17:10.353158    8580 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1201 10:17:10.353354    8580 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/no-preload-322000/disk.qcow2.raw /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/no-preload-322000/disk.qcow2
	I1201 10:17:10.366259    8580 main.go:141] libmachine: STDOUT: 
	I1201 10:17:10.366280    8580 main.go:141] libmachine: STDERR: 
	I1201 10:17:10.366344    8580 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/no-preload-322000/disk.qcow2 +20000M
	I1201 10:17:10.378386    8580 main.go:141] libmachine: STDOUT: Image resized.
	
	I1201 10:17:10.378404    8580 main.go:141] libmachine: STDERR: 
	I1201 10:17:10.378417    8580 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/no-preload-322000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/no-preload-322000/disk.qcow2
	I1201 10:17:10.378422    8580 main.go:141] libmachine: Starting QEMU VM...
	I1201 10:17:10.378450    8580 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/no-preload-322000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/no-preload-322000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/no-preload-322000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:6e:e1:f7:cb:f9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/no-preload-322000/disk.qcow2
	I1201 10:17:10.380394    8580 main.go:141] libmachine: STDOUT: 
	I1201 10:17:10.380419    8580 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:17:10.380438    8580 client.go:171] LocalClient.Create took 240.847709ms
	I1201 10:17:10.557228    8580 cache.go:162] opening:  /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.1
	I1201 10:17:10.560626    8580 cache.go:162] opening:  /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0
	I1201 10:17:10.565725    8580 cache.go:162] opening:  /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I1201 10:17:10.572675    8580 cache.go:162] opening:  /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.1
	I1201 10:17:10.600506    8580 cache.go:162] opening:  /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1
	I1201 10:17:10.600681    8580 cache.go:162] opening:  /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I1201 10:17:10.608782    8580 cache.go:162] opening:  /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1
	I1201 10:17:10.741732    8580 cache.go:157] /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I1201 10:17:10.741776    8580 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 630.101791ms
	I1201 10:17:10.741806    8580 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I1201 10:17:12.380669    8580 start.go:128] duration metric: createHost completed in 2.268816292s
	I1201 10:17:12.380738    8580 start.go:83] releasing machines lock for "no-preload-322000", held for 2.268997875s
	W1201 10:17:12.380801    8580 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:17:12.404975    8580 out.go:177] * Deleting "no-preload-322000" in qemu2 ...
	W1201 10:17:12.432341    8580 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:17:12.432375    8580 start.go:709] Will try again in 5 seconds ...
	I1201 10:17:13.817330    8580 cache.go:157] /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.1 exists
	I1201 10:17:13.817393    8580 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.29.0-rc.1" -> "/Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.1" took 3.706227625s
	I1201 10:17:13.817424    8580 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.29.0-rc.1 -> /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.1 succeeded
	I1201 10:17:14.153682    8580 cache.go:157] /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1 exists
	I1201 10:17:14.153735    8580 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.29.0-rc.1" -> "/Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1" took 4.042472s
	I1201 10:17:14.153790    8580 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.29.0-rc.1 -> /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1 succeeded
	I1201 10:17:14.477903    8580 cache.go:157] /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.1 exists
	I1201 10:17:14.477961    8580 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.29.0-rc.1" -> "/Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.1" took 4.366807166s
	I1201 10:17:14.478007    8580 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.29.0-rc.1 -> /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.1 succeeded
	I1201 10:17:14.619313    8580 cache.go:157] /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I1201 10:17:14.619362    8580 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 4.508116416s
	I1201 10:17:14.619393    8580 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I1201 10:17:15.186354    8580 cache.go:157] /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1 exists
	I1201 10:17:15.186363    8580 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.29.0-rc.1" -> "/Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1" took 5.07527325s
	I1201 10:17:15.186371    8580 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.29.0-rc.1 -> /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1 succeeded
	I1201 10:17:17.432422    8580 start.go:365] acquiring machines lock for no-preload-322000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:17:17.432494    8580 start.go:369] acquired machines lock for "no-preload-322000" in 57.333µs
	I1201 10:17:17.432520    8580 start.go:93] Provisioning new machine with config: &{Name:no-preload-322000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.1 ClusterName:no-preload-322000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1201 10:17:17.432561    8580 start.go:125] createHost starting for "" (driver="qemu2")
	I1201 10:17:17.441254    8580 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1201 10:17:17.456414    8580 start.go:159] libmachine.API.Create for "no-preload-322000" (driver="qemu2")
	I1201 10:17:17.456454    8580 client.go:168] LocalClient.Create starting
	I1201 10:17:17.456521    8580 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/ca.pem
	I1201 10:17:17.456549    8580 main.go:141] libmachine: Decoding PEM data...
	I1201 10:17:17.456565    8580 main.go:141] libmachine: Parsing certificate...
	I1201 10:17:17.456607    8580 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/cert.pem
	I1201 10:17:17.456622    8580 main.go:141] libmachine: Decoding PEM data...
	I1201 10:17:17.456630    8580 main.go:141] libmachine: Parsing certificate...
	I1201 10:17:17.456905    8580 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17703-5375/.minikube/cache/iso/arm64/minikube-v1.32.1-1701387192-17703-arm64.iso...
	I1201 10:17:17.657954    8580 main.go:141] libmachine: Creating SSH key...
	I1201 10:17:17.735149    8580 main.go:141] libmachine: Creating Disk image...
	I1201 10:17:17.735157    8580 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1201 10:17:17.735330    8580 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/no-preload-322000/disk.qcow2.raw /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/no-preload-322000/disk.qcow2
	I1201 10:17:17.756156    8580 main.go:141] libmachine: STDOUT: 
	I1201 10:17:17.756177    8580 main.go:141] libmachine: STDERR: 
	I1201 10:17:17.756238    8580 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/no-preload-322000/disk.qcow2 +20000M
	I1201 10:17:17.767789    8580 main.go:141] libmachine: STDOUT: Image resized.
	
	I1201 10:17:17.767818    8580 main.go:141] libmachine: STDERR: 
	I1201 10:17:17.767833    8580 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/no-preload-322000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/no-preload-322000/disk.qcow2
	I1201 10:17:17.767839    8580 main.go:141] libmachine: Starting QEMU VM...
	I1201 10:17:17.767882    8580 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/no-preload-322000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/no-preload-322000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/no-preload-322000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:c3:01:62:32:f8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/no-preload-322000/disk.qcow2
	I1201 10:17:17.769906    8580 main.go:141] libmachine: STDOUT: 
	I1201 10:17:17.769938    8580 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:17:17.769951    8580 client.go:171] LocalClient.Create took 313.500791ms
	I1201 10:17:18.103588    8580 cache.go:157] /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0 exists
	I1201 10:17:18.103655    8580 cache.go:96] cache image "registry.k8s.io/etcd:3.5.10-0" -> "/Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0" took 7.99247225s
	I1201 10:17:18.103692    8580 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.10-0 -> /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0 succeeded
	I1201 10:17:18.103747    8580 cache.go:87] Successfully saved all images to host disk.
	I1201 10:17:19.772104    8580 start.go:128] duration metric: createHost completed in 2.339576625s
	I1201 10:17:19.772182    8580 start.go:83] releasing machines lock for "no-preload-322000", held for 2.339729625s
	W1201 10:17:19.772456    8580 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-322000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-322000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:17:19.795985    8580 out.go:177] 
	W1201 10:17:19.810116    8580 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1201 10:17:19.810154    8580 out.go:239] * 
	* 
	W1201 10:17:19.812783    8580 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 10:17:19.820032    8580 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-322000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-322000 -n no-preload-322000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-322000 -n no-preload-322000: exit status 7 (64.941083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-322000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.94s)
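
Unlike the old-k8s-version run, this test (--preload=false) also exercises the per-image cache path, and the cache.go lines above show that all v1.29.0-rc.1 control-plane images were downloaded and saved to the host cache even though the VM never started. A quick way to confirm that on the agent, using the cache path taken verbatim from the log:

	ls /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/

So the only blocker for this test is the same socket_vmnet connection refusal seen throughout the report.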

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-277000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-277000 create -f testdata/busybox.yaml: exit status 1 (28.903458ms)

                                                
                                                
** stderr ** 
	error: no openapi getter

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-277000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-277000 -n old-k8s-version-277000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-277000 -n old-k8s-version-277000: exit status 7 (30.700208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-277000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-277000 -n old-k8s-version-277000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-277000 -n old-k8s-version-277000: exit status 7 (31.285625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-277000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
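
kubectl's "error: no openapi getter" is consistent with the earlier FirstStart failure: the old-k8s-version-277000 context has no reachable API server behind it (the VM was never created), so the client cannot fetch the schema it needs to create the busybox manifest. A quick sanity-check sketch using standard kubectl commands, with only the context name taken from the log:

	kubectl config get-contexts old-k8s-version-277000
	kubectl --context old-k8s-version-277000 cluster-info
	kubectl --context old-k8s-version-277000 get nodes

If these cannot reach the API server, DeployApp and the dependent tests below cannot pass until FirstStart succeeds.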

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.13s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-277000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-277000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-277000 describe deploy/metrics-server -n kube-system: exit status 1 (27.949ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-277000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-277000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-277000 -n old-k8s-version-277000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-277000 -n old-k8s-version-277000: exit status 7 (34.902542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-277000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.13s)
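Note: the addon enable call itself returned no error; the failure is in the follow-up describe, because the kubeconfig context was already gone. Once a cluster exists for this profile, the image/registry override asserted at line 221 could be checked by hand with something like the following (a sketch; the jsonpath expression is illustrative and not taken from the test):
	kubectl --context old-k8s-version-277000 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'
The expected output would contain "fake.domain/registry.k8s.io/echoserver:1.4".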

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (7.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-277000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-277000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0: exit status 80 (7.132075041s)

                                                
                                                
-- stdout --
	* [old-k8s-version-277000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17703
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the qemu2 driver based on existing profile
	* Starting control plane node old-k8s-version-277000 in cluster old-k8s-version-277000
	* Restarting existing qemu2 VM for "old-k8s-version-277000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-277000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 10:17:17.794674    8645 out.go:296] Setting OutFile to fd 1 ...
	I1201 10:17:17.794825    8645 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:17:17.794829    8645 out.go:309] Setting ErrFile to fd 2...
	I1201 10:17:17.794831    8645 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:17:17.794948    8645 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17703-5375/.minikube/bin
	I1201 10:17:17.795995    8645 out.go:303] Setting JSON to false
	I1201 10:17:17.812159    8645 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2811,"bootTime":1701451826,"procs":450,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1201 10:17:17.812250    8645 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1201 10:17:17.817252    8645 out.go:177] * [old-k8s-version-277000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1201 10:17:17.830283    8645 out.go:177]   - MINIKUBE_LOCATION=17703
	I1201 10:17:17.825256    8645 notify.go:220] Checking for updates...
	I1201 10:17:17.838250    8645 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	I1201 10:17:17.846237    8645 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1201 10:17:17.850283    8645 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 10:17:17.857231    8645 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	I1201 10:17:17.865292    8645 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 10:17:17.868510    8645 config.go:182] Loaded profile config "old-k8s-version-277000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I1201 10:17:17.873280    8645 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I1201 10:17:17.877203    8645 driver.go:392] Setting default libvirt URI to qemu:///system
	I1201 10:17:17.881296    8645 out.go:177] * Using the qemu2 driver based on existing profile
	I1201 10:17:17.885117    8645 start.go:298] selected driver: qemu2
	I1201 10:17:17.885123    8645 start.go:902] validating driver "qemu2" against &{Name:old-k8s-version-277000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-277000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequ
ested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1201 10:17:17.885202    8645 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 10:17:17.887830    8645 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 10:17:17.887884    8645 cni.go:84] Creating CNI manager for ""
	I1201 10:17:17.887892    8645 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1201 10:17:17.887896    8645 start_flags.go:323] config:
	{Name:old-k8s-version-277000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-277000 Namespace:
default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Use
rs:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1201 10:17:17.892425    8645 iso.go:125] acquiring lock: {Name:mk3aa781e939aa763c73ad5fb916ad9c0c6cf746 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 10:17:17.898803    8645 out.go:177] * Starting control plane node old-k8s-version-277000 in cluster old-k8s-version-277000
	I1201 10:17:17.903278    8645 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1201 10:17:17.903309    8645 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I1201 10:17:17.903321    8645 cache.go:56] Caching tarball of preloaded images
	I1201 10:17:17.903389    8645 preload.go:174] Found /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1201 10:17:17.903396    8645 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1201 10:17:17.903462    8645 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17703-5375/.minikube/profiles/old-k8s-version-277000/config.json ...
	I1201 10:17:17.903972    8645 start.go:365] acquiring machines lock for old-k8s-version-277000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:17:19.772339    8645 start.go:369] acquired machines lock for "old-k8s-version-277000" in 1.868331709s
	I1201 10:17:19.772494    8645 start.go:96] Skipping create...Using existing machine configuration
	I1201 10:17:19.772526    8645 fix.go:54] fixHost starting: 
	I1201 10:17:19.773212    8645 fix.go:102] recreateIfNeeded on old-k8s-version-277000: state=Stopped err=<nil>
	W1201 10:17:19.773260    8645 fix.go:128] unexpected machine state, will restart: <nil>
	I1201 10:17:19.795985    8645 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-277000" ...
	I1201 10:17:19.806402    8645 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/old-k8s-version-277000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/old-k8s-version-277000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/old-k8s-version-277000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:c1:e6:8c:43:19 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/old-k8s-version-277000/disk.qcow2
	I1201 10:17:19.817192    8645 main.go:141] libmachine: STDOUT: 
	I1201 10:17:19.817292    8645 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:17:19.817391    8645 fix.go:56] fixHost completed within 44.873042ms
	I1201 10:17:19.817408    8645 start.go:83] releasing machines lock for "old-k8s-version-277000", held for 45.033375ms
	W1201 10:17:19.817457    8645 start.go:694] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1201 10:17:19.817613    8645 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:17:19.817630    8645 start.go:709] Will try again in 5 seconds ...
	I1201 10:17:24.819739    8645 start.go:365] acquiring machines lock for old-k8s-version-277000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:17:24.820112    8645 start.go:369] acquired machines lock for "old-k8s-version-277000" in 263.584µs
	I1201 10:17:24.820223    8645 start.go:96] Skipping create...Using existing machine configuration
	I1201 10:17:24.820242    8645 fix.go:54] fixHost starting: 
	I1201 10:17:24.820940    8645 fix.go:102] recreateIfNeeded on old-k8s-version-277000: state=Stopped err=<nil>
	W1201 10:17:24.820972    8645 fix.go:128] unexpected machine state, will restart: <nil>
	I1201 10:17:24.837601    8645 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-277000" ...
	I1201 10:17:24.848565    8645 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/old-k8s-version-277000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/old-k8s-version-277000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/old-k8s-version-277000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:c1:e6:8c:43:19 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/old-k8s-version-277000/disk.qcow2
	I1201 10:17:24.857256    8645 main.go:141] libmachine: STDOUT: 
	I1201 10:17:24.857346    8645 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:17:24.857434    8645 fix.go:56] fixHost completed within 37.188083ms
	I1201 10:17:24.857457    8645 start.go:83] releasing machines lock for "old-k8s-version-277000", held for 37.320417ms
	W1201 10:17:24.857725    8645 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-277000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-277000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:17:24.868510    8645 out.go:177] 
	W1201 10:17:24.872551    8645 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1201 10:17:24.872592    8645 out.go:239] * 
	* 
	W1201 10:17:24.875207    8645 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 10:17:24.887492    8645 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-277000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-277000 -n old-k8s-version-277000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-277000 -n old-k8s-version-277000: exit status 7 (67.715916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-277000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (7.20s)
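Note: both restart attempts fail at the same point: socket_vmnet_client cannot reach /var/run/socket_vmnet, so the qemu2 VM never gets networking. A minimal host-side check, assuming socket_vmnet was installed via Homebrew (the paths come from the qemu command line above; the Homebrew service name is an assumption):
	ls -l /var/run/socket_vmnet
	ls -l /opt/socket_vmnet/bin/socket_vmnet_client
	sudo brew services list | grep socket_vmnet      # assumes the service is registered as "socket_vmnet"
	sudo brew services restart socket_vmnet          # same assumption; restarts the daemon if it is down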

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-322000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-322000 create -f testdata/busybox.yaml: exit status 1 (27.65375ms)

                                                
                                                
** stderr ** 
	error: no openapi getter

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-322000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-322000 -n no-preload-322000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-322000 -n no-preload-322000: exit status 7 (30.298292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-322000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-322000 -n no-preload-322000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-322000 -n no-preload-322000: exit status 7 (30.89925ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-322000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-322000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-322000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-322000 describe deploy/metrics-server -n kube-system: exit status 1 (24.871375ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-322000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-322000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-322000 -n no-preload-322000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-322000 -n no-preload-322000: exit status 7 (31.631583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-322000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (5.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-322000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-322000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.1: exit status 80 (5.180882834s)

                                                
                                                
-- stdout --
	* [no-preload-322000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17703
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node no-preload-322000 in cluster no-preload-322000
	* Restarting existing qemu2 VM for "no-preload-322000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-322000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 10:17:20.304935    8671 out.go:296] Setting OutFile to fd 1 ...
	I1201 10:17:20.305066    8671 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:17:20.305070    8671 out.go:309] Setting ErrFile to fd 2...
	I1201 10:17:20.305072    8671 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:17:20.305195    8671 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17703-5375/.minikube/bin
	I1201 10:17:20.306277    8671 out.go:303] Setting JSON to false
	I1201 10:17:20.322180    8671 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2814,"bootTime":1701451826,"procs":450,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1201 10:17:20.322264    8671 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1201 10:17:20.325413    8671 out.go:177] * [no-preload-322000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1201 10:17:20.335342    8671 out.go:177]   - MINIKUBE_LOCATION=17703
	I1201 10:17:20.331477    8671 notify.go:220] Checking for updates...
	I1201 10:17:20.343313    8671 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	I1201 10:17:20.351411    8671 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1201 10:17:20.355357    8671 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 10:17:20.358411    8671 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	I1201 10:17:20.361413    8671 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 10:17:20.364747    8671 config.go:182] Loaded profile config "no-preload-322000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.1
	I1201 10:17:20.365018    8671 driver.go:392] Setting default libvirt URI to qemu:///system
	I1201 10:17:20.369429    8671 out.go:177] * Using the qemu2 driver based on existing profile
	I1201 10:17:20.380379    8671 start.go:298] selected driver: qemu2
	I1201 10:17:20.380385    8671 start.go:902] validating driver "qemu2" against &{Name:no-preload-322000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.29.0-rc.1 ClusterName:no-preload-322000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNode
Requested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1201 10:17:20.380441    8671 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 10:17:20.382964    8671 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 10:17:20.382994    8671 cni.go:84] Creating CNI manager for ""
	I1201 10:17:20.383004    8671 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1201 10:17:20.383013    8671 start_flags.go:323] config:
	{Name:no-preload-322000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.1 ClusterName:no-preload-322000 Namespace:defa
ult APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:
/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1201 10:17:20.387770    8671 iso.go:125] acquiring lock: {Name:mk3aa781e939aa763c73ad5fb916ad9c0c6cf746 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 10:17:20.395368    8671 out.go:177] * Starting control plane node no-preload-322000 in cluster no-preload-322000
	I1201 10:17:20.399505    8671 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime docker
	I1201 10:17:20.399574    8671 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17703-5375/.minikube/profiles/no-preload-322000/config.json ...
	I1201 10:17:20.399610    8671 cache.go:107] acquiring lock: {Name:mka501a835474b1504eb101a162349567a33bc80 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 10:17:20.399617    8671 cache.go:107] acquiring lock: {Name:mkbd589e9428d368ad6d0ef8873b50e60c3e0495 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 10:17:20.399629    8671 cache.go:107] acquiring lock: {Name:mk7b2713d5a7627eed464aad8e153292eed05898 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 10:17:20.399685    8671 cache.go:115] /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.1 exists
	I1201 10:17:20.399692    8671 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.29.0-rc.1" -> "/Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.1" took 63.375µs
	I1201 10:17:20.399692    8671 cache.go:115] /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1 exists
	I1201 10:17:20.399699    8671 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.29.0-rc.1 -> /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.1 succeeded
	I1201 10:17:20.399694    8671 cache.go:115] /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1 exists
	I1201 10:17:20.399704    8671 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.29.0-rc.1" -> "/Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1" took 97.75µs
	I1201 10:17:20.399708    8671 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.29.0-rc.1 -> /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1 succeeded
	I1201 10:17:20.399701    8671 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.29.0-rc.1" -> "/Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1" took 97.208µs
	I1201 10:17:20.399725    8671 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.29.0-rc.1 -> /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1 succeeded
	I1201 10:17:20.399718    8671 cache.go:107] acquiring lock: {Name:mk528e8c6542c8253276cdca2139f40214aa1f35 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 10:17:20.399610    8671 cache.go:107] acquiring lock: {Name:mk77e14012bc5feecacbad696729eadaa024d606 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 10:17:20.399700    8671 cache.go:107] acquiring lock: {Name:mk96db6c3cda4de37492a4e75bc221be04111cf5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 10:17:20.399768    8671 cache.go:115] /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I1201 10:17:20.399763    8671 cache.go:107] acquiring lock: {Name:mk6ec5c8e4edabdfc3c09d4bd55c80a05919027e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 10:17:20.399772    8671 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 54.917µs
	I1201 10:17:20.399776    8671 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I1201 10:17:20.399797    8671 cache.go:115] /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I1201 10:17:20.399804    8671 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 104.583µs
	I1201 10:17:20.399809    8671 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I1201 10:17:20.399827    8671 cache.go:115] /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1201 10:17:20.399829    8671 cache.go:115] /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.1 exists
	I1201 10:17:20.399831    8671 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 225.583µs
	I1201 10:17:20.399832    8671 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.29.0-rc.1" -> "/Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.1" took 85.958µs
	I1201 10:17:20.399837    8671 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1201 10:17:20.399838    8671 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.29.0-rc.1 -> /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.1 succeeded
	I1201 10:17:20.399855    8671 cache.go:107] acquiring lock: {Name:mk482905ad960d096e1b713a32e05ed32c6d3445 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 10:17:20.399908    8671 cache.go:115] /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0 exists
	I1201 10:17:20.399912    8671 cache.go:96] cache image "registry.k8s.io/etcd:3.5.10-0" -> "/Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0" took 191.75µs
	I1201 10:17:20.399920    8671 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.10-0 -> /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0 succeeded
	I1201 10:17:20.399930    8671 cache.go:87] Successfully saved all images to host disk.
	I1201 10:17:20.400023    8671 start.go:365] acquiring machines lock for no-preload-322000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:17:20.400050    8671 start.go:369] acquired machines lock for "no-preload-322000" in 21.542µs
	I1201 10:17:20.400060    8671 start.go:96] Skipping create...Using existing machine configuration
	I1201 10:17:20.400066    8671 fix.go:54] fixHost starting: 
	I1201 10:17:20.400186    8671 fix.go:102] recreateIfNeeded on no-preload-322000: state=Stopped err=<nil>
	W1201 10:17:20.400195    8671 fix.go:128] unexpected machine state, will restart: <nil>
	I1201 10:17:20.407445    8671 out.go:177] * Restarting existing qemu2 VM for "no-preload-322000" ...
	I1201 10:17:20.411404    8671 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/no-preload-322000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/no-preload-322000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/no-preload-322000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:c3:01:62:32:f8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/no-preload-322000/disk.qcow2
	I1201 10:17:20.413605    8671 main.go:141] libmachine: STDOUT: 
	I1201 10:17:20.413629    8671 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:17:20.413662    8671 fix.go:56] fixHost completed within 13.594542ms
	I1201 10:17:20.413666    8671 start.go:83] releasing machines lock for "no-preload-322000", held for 13.6115ms
	W1201 10:17:20.413674    8671 start.go:694] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1201 10:17:20.413715    8671 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:17:20.413720    8671 start.go:709] Will try again in 5 seconds ...
	I1201 10:17:25.414231    8671 start.go:365] acquiring machines lock for no-preload-322000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:17:25.414314    8671 start.go:369] acquired machines lock for "no-preload-322000" in 64.75µs
	I1201 10:17:25.414340    8671 start.go:96] Skipping create...Using existing machine configuration
	I1201 10:17:25.414343    8671 fix.go:54] fixHost starting: 
	I1201 10:17:25.414491    8671 fix.go:102] recreateIfNeeded on no-preload-322000: state=Stopped err=<nil>
	W1201 10:17:25.414496    8671 fix.go:128] unexpected machine state, will restart: <nil>
	I1201 10:17:25.419246    8671 out.go:177] * Restarting existing qemu2 VM for "no-preload-322000" ...
	I1201 10:17:25.426164    8671 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/no-preload-322000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/no-preload-322000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/no-preload-322000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:c3:01:62:32:f8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/no-preload-322000/disk.qcow2
	I1201 10:17:25.428135    8671 main.go:141] libmachine: STDOUT: 
	I1201 10:17:25.428151    8671 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:17:25.428174    8671 fix.go:56] fixHost completed within 13.830917ms
	I1201 10:17:25.428178    8671 start.go:83] releasing machines lock for "no-preload-322000", held for 13.860417ms
	W1201 10:17:25.428226    8671 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-322000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-322000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:17:25.432552    8671 out.go:177] 
	W1201 10:17:25.436215    8671 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1201 10:17:25.436221    8671 out.go:239] * 
	* 
	W1201 10:17:25.436667    8671 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 10:17:25.448229    8671 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-322000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-322000 -n no-preload-322000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-322000 -n no-preload-322000: exit status 7 (34.547417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-322000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.22s)
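Note: the error text itself proposes recreating the profile. Following that suggestion by hand would look roughly like this (sketch only; the start flags are copied verbatim from the failing invocation above):
	out/minikube-darwin-arm64 delete -p no-preload-322000
	out/minikube-darwin-arm64 start -p no-preload-322000 --memory=2200 --alsologtostderr --wait=true \
	  --preload=false --driver=qemu2 --kubernetes-version=v1.29.0-rc.1
This only helps once the socket_vmnet connection issue on the host is resolved, since a fresh profile will hit the same "Connection refused" on start.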

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-277000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-277000 -n old-k8s-version-277000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-277000 -n old-k8s-version-277000: exit status 7 (32.747667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-277000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-277000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-277000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-277000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (25.022417ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-277000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-277000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-277000 -n old-k8s-version-277000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-277000 -n old-k8s-version-277000: exit status 7 (31.037084ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-277000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-277000 image list --format=json
start_stop_delete_test.go:304: v1.16.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.6.2",
- 	"k8s.gcr.io/etcd:3.3.15-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.16.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.16.0",
- 	"k8s.gcr.io/kube-proxy:v1.16.0",
- 	"k8s.gcr.io/kube-scheduler:v1.16.0",
- 	"k8s.gcr.io/pause:3.1",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-277000 -n old-k8s-version-277000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-277000 -n old-k8s-version-277000: exit status 7 (31.081917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-277000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
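Note: the empty "+got" side of the diff means `image list` returned nothing, which follows from the stopped host rather than from images actually missing. With a running VM the same check can be repeated by hand, either with the json format the test uses or a human-readable one (commands mirror the logged invocation):
	out/minikube-darwin-arm64 -p old-k8s-version-277000 image list --format=json
	out/minikube-darwin-arm64 -p old-k8s-version-277000 image list --format=table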

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-277000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-277000 --alsologtostderr -v=1: exit status 89 (47.270916ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p old-k8s-version-277000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 10:17:25.157315    8690 out.go:296] Setting OutFile to fd 1 ...
	I1201 10:17:25.157457    8690 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:17:25.157461    8690 out.go:309] Setting ErrFile to fd 2...
	I1201 10:17:25.157464    8690 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:17:25.157595    8690 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17703-5375/.minikube/bin
	I1201 10:17:25.157814    8690 out.go:303] Setting JSON to false
	I1201 10:17:25.157823    8690 mustload.go:65] Loading cluster: old-k8s-version-277000
	I1201 10:17:25.158025    8690 config.go:182] Loaded profile config "old-k8s-version-277000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I1201 10:17:25.161309    8690 out.go:177] * The control plane node must be running for this command
	I1201 10:17:25.170210    8690 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-277000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-277000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-277000 -n old-k8s-version-277000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-277000 -n old-k8s-version-277000: exit status 7 (32.601459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-277000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-277000 -n old-k8s-version-277000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-277000 -n old-k8s-version-277000: exit status 7 (30.718375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-277000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-322000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-322000 -n no-preload-322000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-322000 -n no-preload-322000: exit status 7 (33.088ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-322000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-322000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-322000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-322000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.285375ms)

** stderr ** 
	error: context "no-preload-322000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-322000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-322000 -n no-preload-322000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-322000 -n no-preload-322000: exit status 7 (33.339084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-322000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)
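
Note: the repeated `context "no-preload-322000" does not exist` errors mean no kubeconfig entry was ever written for this profile, which follows from the cluster never starting. A quick sanity check, assuming kubectl and the built minikube binary are on hand (commands illustrative):

	# list kubeconfig contexts; a healthy profile would show up here
	kubectl config get-contexts

	# cross-check which profiles minikube itself knows about
	out/minikube-darwin-arm64 profile list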

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-322000 image list --format=json
start_stop_delete_test.go:304: v1.29.0-rc.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.10-0",
- 	"registry.k8s.io/kube-apiserver:v1.29.0-rc.1",
- 	"registry.k8s.io/kube-controller-manager:v1.29.0-rc.1",
- 	"registry.k8s.io/kube-proxy:v1.29.0-rc.1",
- 	"registry.k8s.io/kube-scheduler:v1.29.0-rc.1",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-322000 -n no-preload-322000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-322000 -n no-preload-322000: exit status 7 (33.706125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-322000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)

TestStartStop/group/embed-certs/serial/FirstStart (10.04s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-920000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-920000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4: exit status 80 (9.963944208s)

-- stdout --
	* [embed-certs-920000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17703
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node embed-certs-920000 in cluster embed-certs-920000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-920000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1201 10:17:25.661422    8723 out.go:296] Setting OutFile to fd 1 ...
	I1201 10:17:25.661602    8723 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:17:25.661607    8723 out.go:309] Setting ErrFile to fd 2...
	I1201 10:17:25.661609    8723 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:17:25.661734    8723 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17703-5375/.minikube/bin
	I1201 10:17:25.662988    8723 out.go:303] Setting JSON to false
	I1201 10:17:25.680489    8723 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2819,"bootTime":1701451826,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1201 10:17:25.680560    8723 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1201 10:17:25.684688    8723 out.go:177] * [embed-certs-920000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1201 10:17:25.692685    8723 notify.go:220] Checking for updates...
	I1201 10:17:25.695716    8723 out.go:177]   - MINIKUBE_LOCATION=17703
	I1201 10:17:25.702734    8723 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	I1201 10:17:25.709646    8723 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1201 10:17:25.716548    8723 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 10:17:25.723678    8723 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	I1201 10:17:25.734637    8723 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 10:17:25.737990    8723 config.go:182] Loaded profile config "multinode-486000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1201 10:17:25.738056    8723 config.go:182] Loaded profile config "no-preload-322000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.1
	I1201 10:17:25.738105    8723 driver.go:392] Setting default libvirt URI to qemu:///system
	I1201 10:17:25.741643    8723 out.go:177] * Using the qemu2 driver based on user configuration
	I1201 10:17:25.748654    8723 start.go:298] selected driver: qemu2
	I1201 10:17:25.748670    8723 start.go:902] validating driver "qemu2" against <nil>
	I1201 10:17:25.748677    8723 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 10:17:25.751344    8723 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1201 10:17:25.754685    8723 out.go:177] * Automatically selected the socket_vmnet network
	I1201 10:17:25.758825    8723 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 10:17:25.758895    8723 cni.go:84] Creating CNI manager for ""
	I1201 10:17:25.758904    8723 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1201 10:17:25.758909    8723 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1201 10:17:25.758915    8723 start_flags.go:323] config:
	{Name:embed-certs-920000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-920000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SS
HAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1201 10:17:25.763475    8723 iso.go:125] acquiring lock: {Name:mk3aa781e939aa763c73ad5fb916ad9c0c6cf746 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 10:17:25.770525    8723 out.go:177] * Starting control plane node embed-certs-920000 in cluster embed-certs-920000
	I1201 10:17:25.774657    8723 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1201 10:17:25.774693    8723 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I1201 10:17:25.774715    8723 cache.go:56] Caching tarball of preloaded images
	I1201 10:17:25.774787    8723 preload.go:174] Found /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1201 10:17:25.774792    8723 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1201 10:17:25.774857    8723 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17703-5375/.minikube/profiles/embed-certs-920000/config.json ...
	I1201 10:17:25.774866    8723 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17703-5375/.minikube/profiles/embed-certs-920000/config.json: {Name:mk3bcf93b45613454abbc02ad90a8907fe4ebded Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 10:17:25.775086    8723 start.go:365] acquiring machines lock for embed-certs-920000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:17:25.775111    8723 start.go:369] acquired machines lock for "embed-certs-920000" in 17.917µs
	I1201 10:17:25.775126    8723 start.go:93] Provisioning new machine with config: &{Name:embed-certs-920000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-920000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mount
MSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1201 10:17:25.775159    8723 start.go:125] createHost starting for "" (driver="qemu2")
	I1201 10:17:25.783687    8723 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1201 10:17:25.798752    8723 start.go:159] libmachine.API.Create for "embed-certs-920000" (driver="qemu2")
	I1201 10:17:25.798787    8723 client.go:168] LocalClient.Create starting
	I1201 10:17:25.798888    8723 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/ca.pem
	I1201 10:17:25.798919    8723 main.go:141] libmachine: Decoding PEM data...
	I1201 10:17:25.798933    8723 main.go:141] libmachine: Parsing certificate...
	I1201 10:17:25.798978    8723 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/cert.pem
	I1201 10:17:25.799002    8723 main.go:141] libmachine: Decoding PEM data...
	I1201 10:17:25.799011    8723 main.go:141] libmachine: Parsing certificate...
	I1201 10:17:25.799373    8723 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17703-5375/.minikube/cache/iso/arm64/minikube-v1.32.1-1701387192-17703-arm64.iso...
	I1201 10:17:25.978709    8723 main.go:141] libmachine: Creating SSH key...
	I1201 10:17:26.038451    8723 main.go:141] libmachine: Creating Disk image...
	I1201 10:17:26.038459    8723 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1201 10:17:26.038610    8723 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/embed-certs-920000/disk.qcow2.raw /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/embed-certs-920000/disk.qcow2
	I1201 10:17:26.051194    8723 main.go:141] libmachine: STDOUT: 
	I1201 10:17:26.051217    8723 main.go:141] libmachine: STDERR: 
	I1201 10:17:26.051282    8723 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/embed-certs-920000/disk.qcow2 +20000M
	I1201 10:17:26.063336    8723 main.go:141] libmachine: STDOUT: Image resized.
	
	I1201 10:17:26.063355    8723 main.go:141] libmachine: STDERR: 
	I1201 10:17:26.063388    8723 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/embed-certs-920000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/embed-certs-920000/disk.qcow2
	I1201 10:17:26.063393    8723 main.go:141] libmachine: Starting QEMU VM...
	I1201 10:17:26.063441    8723 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/embed-certs-920000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/embed-certs-920000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/embed-certs-920000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:0b:45:b4:e1:f8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/embed-certs-920000/disk.qcow2
	I1201 10:17:26.065303    8723 main.go:141] libmachine: STDOUT: 
	I1201 10:17:26.065322    8723 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:17:26.065345    8723 client.go:171] LocalClient.Create took 266.558167ms
	I1201 10:17:28.067645    8723 start.go:128] duration metric: createHost completed in 2.292492625s
	I1201 10:17:28.067728    8723 start.go:83] releasing machines lock for "embed-certs-920000", held for 2.292661417s
	W1201 10:17:28.067828    8723 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:17:28.098699    8723 out.go:177] * Deleting "embed-certs-920000" in qemu2 ...
	W1201 10:17:28.117408    8723 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:17:28.117429    8723 start.go:709] Will try again in 5 seconds ...
	I1201 10:17:33.119553    8723 start.go:365] acquiring machines lock for embed-certs-920000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:17:33.119944    8723 start.go:369] acquired machines lock for "embed-certs-920000" in 308.292µs
	I1201 10:17:33.120095    8723 start.go:93] Provisioning new machine with config: &{Name:embed-certs-920000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-920000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mount
MSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1201 10:17:33.120358    8723 start.go:125] createHost starting for "" (driver="qemu2")
	I1201 10:17:33.140089    8723 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1201 10:17:33.187463    8723 start.go:159] libmachine.API.Create for "embed-certs-920000" (driver="qemu2")
	I1201 10:17:33.187509    8723 client.go:168] LocalClient.Create starting
	I1201 10:17:33.187643    8723 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/ca.pem
	I1201 10:17:33.187706    8723 main.go:141] libmachine: Decoding PEM data...
	I1201 10:17:33.187726    8723 main.go:141] libmachine: Parsing certificate...
	I1201 10:17:33.187795    8723 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/cert.pem
	I1201 10:17:33.187849    8723 main.go:141] libmachine: Decoding PEM data...
	I1201 10:17:33.187867    8723 main.go:141] libmachine: Parsing certificate...
	I1201 10:17:33.188354    8723 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17703-5375/.minikube/cache/iso/arm64/minikube-v1.32.1-1701387192-17703-arm64.iso...
	I1201 10:17:33.332144    8723 main.go:141] libmachine: Creating SSH key...
	I1201 10:17:33.510136    8723 main.go:141] libmachine: Creating Disk image...
	I1201 10:17:33.510143    8723 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1201 10:17:33.510336    8723 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/embed-certs-920000/disk.qcow2.raw /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/embed-certs-920000/disk.qcow2
	I1201 10:17:33.522870    8723 main.go:141] libmachine: STDOUT: 
	I1201 10:17:33.522895    8723 main.go:141] libmachine: STDERR: 
	I1201 10:17:33.522971    8723 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/embed-certs-920000/disk.qcow2 +20000M
	I1201 10:17:33.533452    8723 main.go:141] libmachine: STDOUT: Image resized.
	
	I1201 10:17:33.533467    8723 main.go:141] libmachine: STDERR: 
	I1201 10:17:33.533487    8723 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/embed-certs-920000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/embed-certs-920000/disk.qcow2
	I1201 10:17:33.533493    8723 main.go:141] libmachine: Starting QEMU VM...
	I1201 10:17:33.533526    8723 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/embed-certs-920000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/embed-certs-920000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/embed-certs-920000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:40:85:86:90:e7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/embed-certs-920000/disk.qcow2
	I1201 10:17:33.535164    8723 main.go:141] libmachine: STDOUT: 
	I1201 10:17:33.535182    8723 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:17:33.535194    8723 client.go:171] LocalClient.Create took 347.683709ms
	I1201 10:17:35.537330    8723 start.go:128] duration metric: createHost completed in 2.417003375s
	I1201 10:17:35.537400    8723 start.go:83] releasing machines lock for "embed-certs-920000", held for 2.417484958s
	W1201 10:17:35.537912    8723 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-920000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-920000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:17:35.555616    8723 out.go:177] 
	W1201 10:17:35.559599    8723 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1201 10:17:35.559630    8723 out.go:239] * 
	* 
	W1201 10:17:35.562186    8723 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 10:17:35.573615    8723 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-920000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-920000 -n embed-certs-920000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-920000 -n embed-certs-920000: exit status 7 (72.217834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-920000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (10.04s)
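
Note: this start failure (and the identical one for default-k8s-diff-port-409000 below) traces back to `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. the qemu2 driver's socket_vmnet client finds no daemon listening on that socket. A rough check on the CI host, assuming a source install under /opt/socket_vmnet as the client path in the log suggests (daemon flags as documented in the socket_vmnet README; verify against the installed version):

	# is the socket present, and is anything serving it?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet

	# if nothing is listening, start the daemon (illustrative; gateway address may differ)
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet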

TestStartStop/group/no-preload/serial/Pause (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-322000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-322000 --alsologtostderr -v=1: exit status 89 (58.778541ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p no-preload-322000"

-- /stdout --
** stderr ** 
	I1201 10:17:25.698370    8728 out.go:296] Setting OutFile to fd 1 ...
	I1201 10:17:25.699697    8728 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:17:25.699701    8728 out.go:309] Setting ErrFile to fd 2...
	I1201 10:17:25.699704    8728 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:17:25.699837    8728 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17703-5375/.minikube/bin
	I1201 10:17:25.700047    8728 out.go:303] Setting JSON to false
	I1201 10:17:25.700057    8728 mustload.go:65] Loading cluster: no-preload-322000
	I1201 10:17:25.700265    8728 config.go:182] Loaded profile config "no-preload-322000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.1
	I1201 10:17:25.709658    8728 out.go:177] * The control plane node must be running for this command
	I1201 10:17:25.720669    8728 out.go:177]   To start a cluster, run: "minikube start -p no-preload-322000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-322000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-322000 -n no-preload-322000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-322000 -n no-preload-322000: exit status 7 (32.886041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-322000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-322000 -n no-preload-322000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-322000 -n no-preload-322000: exit status 7 (31.513667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-322000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (11.91s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-409000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-409000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4: exit status 80 (11.837023292s)

-- stdout --
	* [default-k8s-diff-port-409000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17703
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node default-k8s-diff-port-409000 in cluster default-k8s-diff-port-409000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-409000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1201 10:17:26.450392    8770 out.go:296] Setting OutFile to fd 1 ...
	I1201 10:17:26.450512    8770 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:17:26.450515    8770 out.go:309] Setting ErrFile to fd 2...
	I1201 10:17:26.450517    8770 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:17:26.450652    8770 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17703-5375/.minikube/bin
	I1201 10:17:26.451731    8770 out.go:303] Setting JSON to false
	I1201 10:17:26.467720    8770 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2820,"bootTime":1701451826,"procs":450,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1201 10:17:26.467804    8770 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1201 10:17:26.472222    8770 out.go:177] * [default-k8s-diff-port-409000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1201 10:17:26.486216    8770 out.go:177]   - MINIKUBE_LOCATION=17703
	I1201 10:17:26.481355    8770 notify.go:220] Checking for updates...
	I1201 10:17:26.493178    8770 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	I1201 10:17:26.500213    8770 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1201 10:17:26.504201    8770 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 10:17:26.512271    8770 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	I1201 10:17:26.520206    8770 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 10:17:26.524640    8770 config.go:182] Loaded profile config "embed-certs-920000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1201 10:17:26.524704    8770 config.go:182] Loaded profile config "multinode-486000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1201 10:17:26.524743    8770 driver.go:392] Setting default libvirt URI to qemu:///system
	I1201 10:17:26.528216    8770 out.go:177] * Using the qemu2 driver based on user configuration
	I1201 10:17:26.536203    8770 start.go:298] selected driver: qemu2
	I1201 10:17:26.536211    8770 start.go:902] validating driver "qemu2" against <nil>
	I1201 10:17:26.536224    8770 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 10:17:26.538938    8770 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1201 10:17:26.543245    8770 out.go:177] * Automatically selected the socket_vmnet network
	I1201 10:17:26.547334    8770 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 10:17:26.547394    8770 cni.go:84] Creating CNI manager for ""
	I1201 10:17:26.547402    8770 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1201 10:17:26.547413    8770 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1201 10:17:26.547420    8770 start_flags.go:323] config:
	{Name:default-k8s-diff-port-409000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-409000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet Stat
icIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1201 10:17:26.552726    8770 iso.go:125] acquiring lock: {Name:mk3aa781e939aa763c73ad5fb916ad9c0c6cf746 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 10:17:26.561281    8770 out.go:177] * Starting control plane node default-k8s-diff-port-409000 in cluster default-k8s-diff-port-409000
	I1201 10:17:26.564165    8770 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1201 10:17:26.564196    8770 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I1201 10:17:26.564204    8770 cache.go:56] Caching tarball of preloaded images
	I1201 10:17:26.564290    8770 preload.go:174] Found /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1201 10:17:26.564298    8770 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1201 10:17:26.564376    8770 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17703-5375/.minikube/profiles/default-k8s-diff-port-409000/config.json ...
	I1201 10:17:26.564389    8770 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17703-5375/.minikube/profiles/default-k8s-diff-port-409000/config.json: {Name:mk853bac4d491b3ada7d9d9d4c2a8a2ea6de2071 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 10:17:26.564782    8770 start.go:365] acquiring machines lock for default-k8s-diff-port-409000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:17:28.067891    8770 start.go:369] acquired machines lock for "default-k8s-diff-port-409000" in 1.5030615s
	I1201 10:17:28.067980    8770 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-409000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22
KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-409000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:
docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1201 10:17:28.068180    8770 start.go:125] createHost starting for "" (driver="qemu2")
	I1201 10:17:28.085761    8770 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1201 10:17:28.133713    8770 start.go:159] libmachine.API.Create for "default-k8s-diff-port-409000" (driver="qemu2")
	I1201 10:17:28.133757    8770 client.go:168] LocalClient.Create starting
	I1201 10:17:28.133873    8770 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/ca.pem
	I1201 10:17:28.133929    8770 main.go:141] libmachine: Decoding PEM data...
	I1201 10:17:28.133946    8770 main.go:141] libmachine: Parsing certificate...
	I1201 10:17:28.134018    8770 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/cert.pem
	I1201 10:17:28.134058    8770 main.go:141] libmachine: Decoding PEM data...
	I1201 10:17:28.134073    8770 main.go:141] libmachine: Parsing certificate...
	I1201 10:17:28.134652    8770 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17703-5375/.minikube/cache/iso/arm64/minikube-v1.32.1-1701387192-17703-arm64.iso...
	I1201 10:17:28.273052    8770 main.go:141] libmachine: Creating SSH key...
	I1201 10:17:28.519181    8770 main.go:141] libmachine: Creating Disk image...
	I1201 10:17:28.519191    8770 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1201 10:17:28.519378    8770 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/default-k8s-diff-port-409000/disk.qcow2.raw /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/default-k8s-diff-port-409000/disk.qcow2
	I1201 10:17:28.531761    8770 main.go:141] libmachine: STDOUT: 
	I1201 10:17:28.531780    8770 main.go:141] libmachine: STDERR: 
	I1201 10:17:28.531835    8770 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/default-k8s-diff-port-409000/disk.qcow2 +20000M
	I1201 10:17:28.542336    8770 main.go:141] libmachine: STDOUT: Image resized.
	
	I1201 10:17:28.542364    8770 main.go:141] libmachine: STDERR: 
	I1201 10:17:28.542383    8770 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/default-k8s-diff-port-409000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/default-k8s-diff-port-409000/disk.qcow2
	I1201 10:17:28.542392    8770 main.go:141] libmachine: Starting QEMU VM...
	I1201 10:17:28.542424    8770 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/default-k8s-diff-port-409000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/default-k8s-diff-port-409000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/default-k8s-diff-port-409000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:d8:df:af:17:ba -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/default-k8s-diff-port-409000/disk.qcow2
	I1201 10:17:28.544082    8770 main.go:141] libmachine: STDOUT: 
	I1201 10:17:28.544096    8770 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:17:28.544116    8770 client.go:171] LocalClient.Create took 410.357958ms
	I1201 10:17:30.546272    8770 start.go:128] duration metric: createHost completed in 2.478125625s
	I1201 10:17:30.546319    8770 start.go:83] releasing machines lock for "default-k8s-diff-port-409000", held for 2.47846025s
	W1201 10:17:30.546380    8770 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:17:30.555037    8770 out.go:177] * Deleting "default-k8s-diff-port-409000" in qemu2 ...
	W1201 10:17:30.582310    8770 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:17:30.582348    8770 start.go:709] Will try again in 5 seconds ...
	I1201 10:17:35.584380    8770 start.go:365] acquiring machines lock for default-k8s-diff-port-409000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:17:35.584729    8770 start.go:369] acquired machines lock for "default-k8s-diff-port-409000" in 272.458µs
	I1201 10:17:35.584854    8770 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-409000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22
KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-409000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:
docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1201 10:17:35.585098    8770 start.go:125] createHost starting for "" (driver="qemu2")
	I1201 10:17:35.595573    8770 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1201 10:17:35.642363    8770 start.go:159] libmachine.API.Create for "default-k8s-diff-port-409000" (driver="qemu2")
	I1201 10:17:35.642399    8770 client.go:168] LocalClient.Create starting
	I1201 10:17:35.642501    8770 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/ca.pem
	I1201 10:17:35.642554    8770 main.go:141] libmachine: Decoding PEM data...
	I1201 10:17:35.642571    8770 main.go:141] libmachine: Parsing certificate...
	I1201 10:17:35.642624    8770 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/cert.pem
	I1201 10:17:35.642651    8770 main.go:141] libmachine: Decoding PEM data...
	I1201 10:17:35.642666    8770 main.go:141] libmachine: Parsing certificate...
	I1201 10:17:35.643141    8770 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17703-5375/.minikube/cache/iso/arm64/minikube-v1.32.1-1701387192-17703-arm64.iso...
	I1201 10:17:35.790294    8770 main.go:141] libmachine: Creating SSH key...
	I1201 10:17:36.151407    8770 main.go:141] libmachine: Creating Disk image...
	I1201 10:17:36.151414    8770 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1201 10:17:36.151596    8770 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/default-k8s-diff-port-409000/disk.qcow2.raw /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/default-k8s-diff-port-409000/disk.qcow2
	I1201 10:17:36.168233    8770 main.go:141] libmachine: STDOUT: 
	I1201 10:17:36.168254    8770 main.go:141] libmachine: STDERR: 
	I1201 10:17:36.168317    8770 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/default-k8s-diff-port-409000/disk.qcow2 +20000M
	I1201 10:17:36.184102    8770 main.go:141] libmachine: STDOUT: Image resized.
	
	I1201 10:17:36.184134    8770 main.go:141] libmachine: STDERR: 
	I1201 10:17:36.184149    8770 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/default-k8s-diff-port-409000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/default-k8s-diff-port-409000/disk.qcow2
	I1201 10:17:36.184154    8770 main.go:141] libmachine: Starting QEMU VM...
	I1201 10:17:36.184190    8770 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/default-k8s-diff-port-409000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/default-k8s-diff-port-409000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/default-k8s-diff-port-409000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:ac:17:a8:36:f7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/default-k8s-diff-port-409000/disk.qcow2
	I1201 10:17:36.186094    8770 main.go:141] libmachine: STDOUT: 
	I1201 10:17:36.186111    8770 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:17:36.186123    8770 client.go:171] LocalClient.Create took 543.732334ms
	I1201 10:17:38.188252    8770 start.go:128] duration metric: createHost completed in 2.603188083s
	I1201 10:17:38.188301    8770 start.go:83] releasing machines lock for "default-k8s-diff-port-409000", held for 2.60361025s
	W1201 10:17:38.188668    8770 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-409000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-409000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:17:38.212236    8770 out.go:177] 
	W1201 10:17:38.224323    8770 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1201 10:17:38.224385    8770 out.go:239] * 
	* 
	W1201 10:17:38.226841    8770 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 10:17:38.240298    8770 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-409000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-409000 -n default-k8s-diff-port-409000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-409000 -n default-k8s-diff-port-409000: exit status 7 (66.272834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-409000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (11.91s)
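Every failure in this group reduces to the same line in the stderr above: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so the qemu2 VM is never started. A minimal standalone probe of that socket, written here as a hypothetical Go diagnostic (it is not part of minikube or of this test suite), looks like:

// socketprobe.go - hypothetical diagnostic; it attempts the same unix-socket
// connection that socket_vmnet_client makes and reports whether
// /var/run/socket_vmnet (path taken from the log above) accepts connections.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet"

	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// A "connection refused" here matches the failures in the log:
		// nothing is listening on the socket path.
		fmt.Fprintf(os.Stderr, "cannot connect to %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Printf("%s is accepting connections\n", sock)
}

If this probe fails the same way on the build agent, the socket_vmnet daemon expected alongside /opt/socket_vmnet/bin/socket_vmnet_client is most likely not running, which would account for the whole block of GUEST_PROVISION failures in this report.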

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-920000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-920000 create -f testdata/busybox.yaml: exit status 1 (29.880959ms)

                                                
                                                
** stderr ** 
	error: no openapi getter

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-920000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-920000 -n embed-certs-920000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-920000 -n embed-certs-920000: exit status 7 (35.009916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-920000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-920000 -n embed-certs-920000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-920000 -n embed-certs-920000: exit status 7 (35.054791ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-920000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-920000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-920000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-920000 describe deploy/metrics-server -n kube-system: exit status 1 (26.193625ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-920000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-920000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-920000 -n embed-certs-920000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-920000 -n embed-certs-920000: exit status 7 (31.696083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-920000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)
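The assertion at start_stop_delete_test.go:221 expects the metrics-server Deployment to reference " fake.domain/registry.k8s.io/echoserver:1.4", but the check never gets that far because the context is unusable. For reference, a rough sketch of the same image check against a healthy cluster, written with client-go (a hypothetical illustration, not the test's own implementation, which shells out to kubectl describe as shown above; the context name is taken from the log):

// imagecheck.go - hypothetical sketch of the image assertion this test makes,
// assuming a reachable cluster for the "embed-certs-920000" context.
package main

import (
	"context"
	"fmt"
	"log"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a REST config for the profile's kubeconfig context.
	rules := clientcmd.NewDefaultClientConfigLoadingRules()
	overrides := &clientcmd.ConfigOverrides{CurrentContext: "embed-certs-920000"}
	cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, overrides).ClientConfig()
	if err != nil {
		// Mirrors the `context "embed-certs-920000" does not exist` failures above.
		log.Fatalf("client config: %v", err)
	}

	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatalf("clientset: %v", err)
	}

	dep, err := cs.AppsV1().Deployments("kube-system").Get(context.Background(), "metrics-server", metav1.GetOptions{})
	if err != nil {
		log.Fatalf("get deployment: %v", err)
	}

	for _, c := range dep.Spec.Template.Spec.Containers {
		if strings.Contains(c.Image, "fake.domain/registry.k8s.io/echoserver:1.4") {
			fmt.Println("metrics-server uses the expected custom image:", c.Image)
			return
		}
	}
	log.Fatal("metrics-server does not reference the expected image")
}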

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (7.33s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-920000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-920000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4: exit status 80 (7.26351375s)

                                                
                                                
-- stdout --
	* [embed-certs-920000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17703
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node embed-certs-920000 in cluster embed-certs-920000
	* Restarting existing qemu2 VM for "embed-certs-920000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-920000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 10:17:36.083033    8803 out.go:296] Setting OutFile to fd 1 ...
	I1201 10:17:36.083182    8803 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:17:36.083185    8803 out.go:309] Setting ErrFile to fd 2...
	I1201 10:17:36.083187    8803 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:17:36.083309    8803 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17703-5375/.minikube/bin
	I1201 10:17:36.084382    8803 out.go:303] Setting JSON to false
	I1201 10:17:36.100853    8803 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2830,"bootTime":1701451826,"procs":450,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1201 10:17:36.100953    8803 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1201 10:17:36.104650    8803 out.go:177] * [embed-certs-920000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1201 10:17:36.115577    8803 out.go:177]   - MINIKUBE_LOCATION=17703
	I1201 10:17:36.111602    8803 notify.go:220] Checking for updates...
	I1201 10:17:36.123530    8803 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	I1201 10:17:36.130551    8803 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1201 10:17:36.134577    8803 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 10:17:36.138563    8803 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	I1201 10:17:36.150576    8803 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 10:17:36.154825    8803 config.go:182] Loaded profile config "embed-certs-920000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1201 10:17:36.155070    8803 driver.go:392] Setting default libvirt URI to qemu:///system
	I1201 10:17:36.159402    8803 out.go:177] * Using the qemu2 driver based on existing profile
	I1201 10:17:36.166523    8803 start.go:298] selected driver: qemu2
	I1201 10:17:36.166529    8803 start.go:902] validating driver "qemu2" against &{Name:embed-certs-920000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-920000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested
:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1201 10:17:36.166582    8803 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 10:17:36.168892    8803 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 10:17:36.168947    8803 cni.go:84] Creating CNI manager for ""
	I1201 10:17:36.168956    8803 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1201 10:17:36.168962    8803 start_flags.go:323] config:
	{Name:embed-certs-920000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-920000 Namespace:default
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/m
inikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1201 10:17:36.173194    8803 iso.go:125] acquiring lock: {Name:mk3aa781e939aa763c73ad5fb916ad9c0c6cf746 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 10:17:36.182678    8803 out.go:177] * Starting control plane node embed-certs-920000 in cluster embed-certs-920000
	I1201 10:17:36.187551    8803 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1201 10:17:36.187583    8803 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I1201 10:17:36.187594    8803 cache.go:56] Caching tarball of preloaded images
	I1201 10:17:36.187672    8803 preload.go:174] Found /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1201 10:17:36.187678    8803 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1201 10:17:36.187758    8803 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17703-5375/.minikube/profiles/embed-certs-920000/config.json ...
	I1201 10:17:36.188152    8803 start.go:365] acquiring machines lock for embed-certs-920000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:17:38.188486    8803 start.go:369] acquired machines lock for "embed-certs-920000" in 2.000293375s
	I1201 10:17:38.188583    8803 start.go:96] Skipping create...Using existing machine configuration
	I1201 10:17:38.188634    8803 fix.go:54] fixHost starting: 
	I1201 10:17:38.189291    8803 fix.go:102] recreateIfNeeded on embed-certs-920000: state=Stopped err=<nil>
	W1201 10:17:38.189337    8803 fix.go:128] unexpected machine state, will restart: <nil>
	I1201 10:17:38.221260    8803 out.go:177] * Restarting existing qemu2 VM for "embed-certs-920000" ...
	I1201 10:17:38.228504    8803 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/embed-certs-920000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/embed-certs-920000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/embed-certs-920000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:40:85:86:90:e7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/embed-certs-920000/disk.qcow2
	I1201 10:17:38.238135    8803 main.go:141] libmachine: STDOUT: 
	I1201 10:17:38.238238    8803 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:17:38.238392    8803 fix.go:56] fixHost completed within 49.764042ms
	I1201 10:17:38.238411    8803 start.go:83] releasing machines lock for "embed-certs-920000", held for 49.891708ms
	W1201 10:17:38.238472    8803 start.go:694] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1201 10:17:38.238710    8803 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:17:38.238734    8803 start.go:709] Will try again in 5 seconds ...
	I1201 10:17:43.240857    8803 start.go:365] acquiring machines lock for embed-certs-920000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:17:43.241182    8803 start.go:369] acquired machines lock for "embed-certs-920000" in 236.75µs
	I1201 10:17:43.241288    8803 start.go:96] Skipping create...Using existing machine configuration
	I1201 10:17:43.241308    8803 fix.go:54] fixHost starting: 
	I1201 10:17:43.242064    8803 fix.go:102] recreateIfNeeded on embed-certs-920000: state=Stopped err=<nil>
	W1201 10:17:43.242093    8803 fix.go:128] unexpected machine state, will restart: <nil>
	I1201 10:17:43.258799    8803 out.go:177] * Restarting existing qemu2 VM for "embed-certs-920000" ...
	I1201 10:17:43.263844    8803 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/embed-certs-920000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/embed-certs-920000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/embed-certs-920000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:40:85:86:90:e7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/embed-certs-920000/disk.qcow2
	I1201 10:17:43.273469    8803 main.go:141] libmachine: STDOUT: 
	I1201 10:17:43.273548    8803 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:17:43.273633    8803 fix.go:56] fixHost completed within 32.326833ms
	I1201 10:17:43.273649    8803 start.go:83] releasing machines lock for "embed-certs-920000", held for 32.446208ms
	W1201 10:17:43.273874    8803 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-920000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-920000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:17:43.286671    8803 out.go:177] 
	W1201 10:17:43.291787    8803 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1201 10:17:43.291818    8803 out.go:239] * 
	* 
	W1201 10:17:43.294311    8803 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 10:17:43.302506    8803 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-920000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-920000 -n embed-certs-920000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-920000 -n embed-certs-920000: exit status 7 (67.172791ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-920000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (7.33s)
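On SecondStart, minikube skips create and goes through fixHost, relaunching qemu for the existing machine with a -pidfile under the profile's machine directory; because socket_vmnet_client exits before qemu is ever exec'd, a qemu process for the profile is most likely never left behind. A quick hypothetical liveness check against that pidfile (the path is copied from the command line in the log; the check itself is not part of minikube) could be:

// qemucheck.go - hypothetical check: does a qemu process still exist for the
// embed-certs-920000 profile? Signal 0 tests existence without signalling.
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
	"syscall"
)

func main() {
	pidfile := "/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/embed-certs-920000/qemu.pid"

	data, err := os.ReadFile(pidfile)
	if err != nil {
		fmt.Fprintf(os.Stderr, "no pidfile (the VM likely never started): %v\n", err)
		os.Exit(1)
	}
	pid, err := strconv.Atoi(strings.TrimSpace(string(data)))
	if err != nil {
		fmt.Fprintf(os.Stderr, "malformed pidfile: %v\n", err)
		os.Exit(1)
	}

	if err := syscall.Kill(pid, 0); err != nil {
		fmt.Printf("qemu pid %d is not running: %v\n", pid, err)
		os.Exit(1)
	}
	fmt.Printf("qemu pid %d is running\n", pid)
}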

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-409000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-409000 create -f testdata/busybox.yaml: exit status 1 (28.289333ms)

                                                
                                                
** stderr ** 
	error: no openapi getter

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-409000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-409000 -n default-k8s-diff-port-409000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-409000 -n default-k8s-diff-port-409000: exit status 7 (30.815459ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-409000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-409000 -n default-k8s-diff-port-409000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-409000 -n default-k8s-diff-port-409000: exit status 7 (30.562375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-409000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-409000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-409000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-409000 describe deploy/metrics-server -n kube-system: exit status 1 (25.791958ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-409000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-409000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-409000 -n default-k8s-diff-port-409000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-409000 -n default-k8s-diff-port-409000: exit status 7 (30.995292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-409000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-409000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-409000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4: exit status 80 (5.207610834s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-409000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17703
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node default-k8s-diff-port-409000 in cluster default-k8s-diff-port-409000
	* Restarting existing qemu2 VM for "default-k8s-diff-port-409000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-409000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 10:17:38.716479    8831 out.go:296] Setting OutFile to fd 1 ...
	I1201 10:17:38.716624    8831 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:17:38.716627    8831 out.go:309] Setting ErrFile to fd 2...
	I1201 10:17:38.716630    8831 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:17:38.716769    8831 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17703-5375/.minikube/bin
	I1201 10:17:38.717795    8831 out.go:303] Setting JSON to false
	I1201 10:17:38.733758    8831 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2832,"bootTime":1701451826,"procs":450,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1201 10:17:38.733859    8831 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1201 10:17:38.739289    8831 out.go:177] * [default-k8s-diff-port-409000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1201 10:17:38.751132    8831 out.go:177]   - MINIKUBE_LOCATION=17703
	I1201 10:17:38.746308    8831 notify.go:220] Checking for updates...
	I1201 10:17:38.758266    8831 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	I1201 10:17:38.765217    8831 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1201 10:17:38.772205    8831 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 10:17:38.779192    8831 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	I1201 10:17:38.787232    8831 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 10:17:38.791551    8831 config.go:182] Loaded profile config "default-k8s-diff-port-409000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1201 10:17:38.791833    8831 driver.go:392] Setting default libvirt URI to qemu:///system
	I1201 10:17:38.795269    8831 out.go:177] * Using the qemu2 driver based on existing profile
	I1201 10:17:38.802291    8831 start.go:298] selected driver: qemu2
	I1201 10:17:38.802297    8831 start.go:902] validating driver "qemu2" against &{Name:default-k8s-diff-port-409000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-409000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subne
t: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1201 10:17:38.802395    8831 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 10:17:38.805012    8831 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 10:17:38.805065    8831 cni.go:84] Creating CNI manager for ""
	I1201 10:17:38.805074    8831 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1201 10:17:38.805082    8831 start_flags.go:323] config:
	{Name:default-k8s-diff-port-409000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-4090
00 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1201 10:17:38.809802    8831 iso.go:125] acquiring lock: {Name:mk3aa781e939aa763c73ad5fb916ad9c0c6cf746 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 10:17:38.818304    8831 out.go:177] * Starting control plane node default-k8s-diff-port-409000 in cluster default-k8s-diff-port-409000
	I1201 10:17:38.823222    8831 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1201 10:17:38.823246    8831 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I1201 10:17:38.823255    8831 cache.go:56] Caching tarball of preloaded images
	I1201 10:17:38.823309    8831 preload.go:174] Found /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1201 10:17:38.823314    8831 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1201 10:17:38.823380    8831 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17703-5375/.minikube/profiles/default-k8s-diff-port-409000/config.json ...
	I1201 10:17:38.823877    8831 start.go:365] acquiring machines lock for default-k8s-diff-port-409000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:17:38.823906    8831 start.go:369] acquired machines lock for "default-k8s-diff-port-409000" in 22.084µs
	I1201 10:17:38.823916    8831 start.go:96] Skipping create...Using existing machine configuration
	I1201 10:17:38.823925    8831 fix.go:54] fixHost starting: 
	I1201 10:17:38.824049    8831 fix.go:102] recreateIfNeeded on default-k8s-diff-port-409000: state=Stopped err=<nil>
	W1201 10:17:38.824061    8831 fix.go:128] unexpected machine state, will restart: <nil>
	I1201 10:17:38.828146    8831 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-409000" ...
	I1201 10:17:38.836292    8831 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/default-k8s-diff-port-409000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/default-k8s-diff-port-409000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/default-k8s-diff-port-409000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:ac:17:a8:36:f7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/default-k8s-diff-port-409000/disk.qcow2
	I1201 10:17:38.838649    8831 main.go:141] libmachine: STDOUT: 
	I1201 10:17:38.838675    8831 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:17:38.838711    8831 fix.go:56] fixHost completed within 14.786ms
	I1201 10:17:38.838716    8831 start.go:83] releasing machines lock for "default-k8s-diff-port-409000", held for 14.805292ms
	W1201 10:17:38.838724    8831 start.go:694] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1201 10:17:38.838767    8831 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:17:38.838773    8831 start.go:709] Will try again in 5 seconds ...
	I1201 10:17:43.838793    8831 start.go:365] acquiring machines lock for default-k8s-diff-port-409000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:17:43.838892    8831 start.go:369] acquired machines lock for "default-k8s-diff-port-409000" in 67.75µs
	I1201 10:17:43.838910    8831 start.go:96] Skipping create...Using existing machine configuration
	I1201 10:17:43.838916    8831 fix.go:54] fixHost starting: 
	I1201 10:17:43.839053    8831 fix.go:102] recreateIfNeeded on default-k8s-diff-port-409000: state=Stopped err=<nil>
	W1201 10:17:43.839059    8831 fix.go:128] unexpected machine state, will restart: <nil>
	I1201 10:17:43.844581    8831 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-409000" ...
	I1201 10:17:43.852443    8831 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/default-k8s-diff-port-409000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/default-k8s-diff-port-409000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/default-k8s-diff-port-409000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:ac:17:a8:36:f7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/default-k8s-diff-port-409000/disk.qcow2
	I1201 10:17:43.854469    8831 main.go:141] libmachine: STDOUT: 
	I1201 10:17:43.854487    8831 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:17:43.854513    8831 fix.go:56] fixHost completed within 15.597417ms
	I1201 10:17:43.854517    8831 start.go:83] releasing machines lock for "default-k8s-diff-port-409000", held for 15.621458ms
	W1201 10:17:43.854556    8831 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-409000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-409000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:17:43.862571    8831 out.go:177] 
	W1201 10:17:43.866509    8831 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1201 10:17:43.866514    8831 out.go:239] * 
	* 
	W1201 10:17:43.866966    8831 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 10:17:43.881586    8831 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-409000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-409000 -n default-k8s-diff-port-409000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-409000 -n default-k8s-diff-port-409000: exit status 7 (34.235708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-409000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-920000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-920000 -n embed-certs-920000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-920000 -n embed-certs-920000: exit status 7 (33.016333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-920000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-920000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-920000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-920000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (25.5145ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-920000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-920000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-920000 -n embed-certs-920000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-920000 -n embed-certs-920000: exit status 7 (30.641583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-920000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-920000 image list --format=json
start_stop_delete_test.go:304: v1.28.4 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.4",
- 	"registry.k8s.io/kube-controller-manager:v1.28.4",
- 	"registry.k8s.io/kube-proxy:v1.28.4",
- 	"registry.k8s.io/kube-scheduler:v1.28.4",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-920000 -n embed-certs-920000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-920000 -n embed-certs-920000: exit status 7 (30.348417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-920000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-920000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-920000 --alsologtostderr -v=1: exit status 89 (43.20875ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p embed-certs-920000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 10:17:43.574221    8860 out.go:296] Setting OutFile to fd 1 ...
	I1201 10:17:43.574382    8860 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:17:43.574385    8860 out.go:309] Setting ErrFile to fd 2...
	I1201 10:17:43.574388    8860 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:17:43.574510    8860 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17703-5375/.minikube/bin
	I1201 10:17:43.574753    8860 out.go:303] Setting JSON to false
	I1201 10:17:43.574763    8860 mustload.go:65] Loading cluster: embed-certs-920000
	I1201 10:17:43.574946    8860 config.go:182] Loaded profile config "embed-certs-920000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1201 10:17:43.578562    8860 out.go:177] * The control plane node must be running for this command
	I1201 10:17:43.582626    8860 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-920000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-920000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-920000 -n embed-certs-920000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-920000 -n embed-certs-920000: exit status 7 (30.941791ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-920000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-920000 -n embed-certs-920000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-920000 -n embed-certs-920000: exit status 7 (30.402125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-920000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-409000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-409000 -n default-k8s-diff-port-409000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-409000 -n default-k8s-diff-port-409000: exit status 7 (32.994583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-409000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-409000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-409000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-409000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.480291ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-409000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-409000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-409000 -n default-k8s-diff-port-409000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-409000 -n default-k8s-diff-port-409000: exit status 7 (32.6415ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-409000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-409000 image list --format=json
start_stop_delete_test.go:304: v1.28.4 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.4",
- 	"registry.k8s.io/kube-controller-manager:v1.28.4",
- 	"registry.k8s.io/kube-proxy:v1.28.4",
- 	"registry.k8s.io/kube-scheduler:v1.28.4",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-409000 -n default-k8s-diff-port-409000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-409000 -n default-k8s-diff-port-409000: exit status 7 (32.672416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-409000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.08s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (9.94s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-205000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-205000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.1: exit status 80 (9.87401475s)

                                                
                                                
-- stdout --
	* [newest-cni-205000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17703
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node newest-cni-205000 in cluster newest-cni-205000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-205000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 10:17:44.094562    8893 out.go:296] Setting OutFile to fd 1 ...
	I1201 10:17:44.094723    8893 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:17:44.094725    8893 out.go:309] Setting ErrFile to fd 2...
	I1201 10:17:44.094731    8893 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:17:44.094868    8893 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17703-5375/.minikube/bin
	I1201 10:17:44.095930    8893 out.go:303] Setting JSON to false
	I1201 10:17:44.113662    8893 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2838,"bootTime":1701451826,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1201 10:17:44.113738    8893 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1201 10:17:44.119048    8893 out.go:177] * [newest-cni-205000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1201 10:17:44.131130    8893 out.go:177]   - MINIKUBE_LOCATION=17703
	I1201 10:17:44.127063    8893 notify.go:220] Checking for updates...
	I1201 10:17:44.139100    8893 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	I1201 10:17:44.149979    8893 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1201 10:17:44.158034    8893 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 10:17:44.168981    8893 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	I1201 10:17:44.176025    8893 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 10:17:44.180386    8893 config.go:182] Loaded profile config "default-k8s-diff-port-409000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1201 10:17:44.180450    8893 config.go:182] Loaded profile config "multinode-486000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1201 10:17:44.180502    8893 driver.go:392] Setting default libvirt URI to qemu:///system
	I1201 10:17:44.185005    8893 out.go:177] * Using the qemu2 driver based on user configuration
	I1201 10:17:44.192051    8893 start.go:298] selected driver: qemu2
	I1201 10:17:44.192062    8893 start.go:902] validating driver "qemu2" against <nil>
	I1201 10:17:44.192068    8893 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 10:17:44.194428    8893 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	W1201 10:17:44.194457    8893 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1201 10:17:44.201980    8893 out.go:177] * Automatically selected the socket_vmnet network
	I1201 10:17:44.205077    8893 start_flags.go:950] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1201 10:17:44.205112    8893 cni.go:84] Creating CNI manager for ""
	I1201 10:17:44.205121    8893 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1201 10:17:44.205125    8893 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1201 10:17:44.205130    8893 start_flags.go:323] config:
	{Name:newest-cni-205000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.1 ClusterName:newest-cni-205000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunt
ime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/
bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1201 10:17:44.210265    8893 iso.go:125] acquiring lock: {Name:mk3aa781e939aa763c73ad5fb916ad9c0c6cf746 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 10:17:44.218898    8893 out.go:177] * Starting control plane node newest-cni-205000 in cluster newest-cni-205000
	I1201 10:17:44.223011    8893 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime docker
	I1201 10:17:44.223030    8893 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.1-docker-overlay2-arm64.tar.lz4
	I1201 10:17:44.223038    8893 cache.go:56] Caching tarball of preloaded images
	I1201 10:17:44.223139    8893 preload.go:174] Found /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1201 10:17:44.223146    8893 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.1 on docker
	I1201 10:17:44.223209    8893 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17703-5375/.minikube/profiles/newest-cni-205000/config.json ...
	I1201 10:17:44.223219    8893 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17703-5375/.minikube/profiles/newest-cni-205000/config.json: {Name:mkf4ba4f8452a741ba5405844ff99e7caee9c53b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 10:17:44.223413    8893 start.go:365] acquiring machines lock for newest-cni-205000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:17:44.223437    8893 start.go:369] acquired machines lock for "newest-cni-205000" in 18.834µs
	I1201 10:17:44.223448    8893 start.go:93] Provisioning new machine with config: &{Name:newest-cni-205000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.29.0-rc.1 ClusterName:newest-cni-205000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1201 10:17:44.223483    8893 start.go:125] createHost starting for "" (driver="qemu2")
	I1201 10:17:44.228977    8893 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1201 10:17:44.244228    8893 start.go:159] libmachine.API.Create for "newest-cni-205000" (driver="qemu2")
	I1201 10:17:44.244252    8893 client.go:168] LocalClient.Create starting
	I1201 10:17:44.244312    8893 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/ca.pem
	I1201 10:17:44.244341    8893 main.go:141] libmachine: Decoding PEM data...
	I1201 10:17:44.244351    8893 main.go:141] libmachine: Parsing certificate...
	I1201 10:17:44.244390    8893 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/cert.pem
	I1201 10:17:44.244411    8893 main.go:141] libmachine: Decoding PEM data...
	I1201 10:17:44.244418    8893 main.go:141] libmachine: Parsing certificate...
	I1201 10:17:44.244770    8893 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17703-5375/.minikube/cache/iso/arm64/minikube-v1.32.1-1701387192-17703-arm64.iso...
	I1201 10:17:44.416497    8893 main.go:141] libmachine: Creating SSH key...
	I1201 10:17:44.463822    8893 main.go:141] libmachine: Creating Disk image...
	I1201 10:17:44.463830    8893 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1201 10:17:44.464003    8893 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/newest-cni-205000/disk.qcow2.raw /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/newest-cni-205000/disk.qcow2
	I1201 10:17:44.476393    8893 main.go:141] libmachine: STDOUT: 
	I1201 10:17:44.476414    8893 main.go:141] libmachine: STDERR: 
	I1201 10:17:44.476471    8893 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/newest-cni-205000/disk.qcow2 +20000M
	I1201 10:17:44.488221    8893 main.go:141] libmachine: STDOUT: Image resized.
	
	I1201 10:17:44.488261    8893 main.go:141] libmachine: STDERR: 
	I1201 10:17:44.488277    8893 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/newest-cni-205000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/newest-cni-205000/disk.qcow2
	I1201 10:17:44.488283    8893 main.go:141] libmachine: Starting QEMU VM...
	I1201 10:17:44.488315    8893 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/newest-cni-205000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/newest-cni-205000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/newest-cni-205000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:88:0b:49:a4:7c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/newest-cni-205000/disk.qcow2
	I1201 10:17:44.490176    8893 main.go:141] libmachine: STDOUT: 
	I1201 10:17:44.490191    8893 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:17:44.490215    8893 client.go:171] LocalClient.Create took 245.962125ms
	I1201 10:17:46.492381    8893 start.go:128] duration metric: createHost completed in 2.268918959s
	I1201 10:17:46.492467    8893 start.go:83] releasing machines lock for "newest-cni-205000", held for 2.269075292s
	W1201 10:17:46.492543    8893 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:17:46.512878    8893 out.go:177] * Deleting "newest-cni-205000" in qemu2 ...
	W1201 10:17:46.541557    8893 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:17:46.541592    8893 start.go:709] Will try again in 5 seconds ...
	I1201 10:17:51.543701    8893 start.go:365] acquiring machines lock for newest-cni-205000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:17:51.544128    8893 start.go:369] acquired machines lock for "newest-cni-205000" in 324.334µs
	I1201 10:17:51.544215    8893 start.go:93] Provisioning new machine with config: &{Name:newest-cni-205000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.29.0-rc.1 ClusterName:newest-cni-205000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1201 10:17:51.544539    8893 start.go:125] createHost starting for "" (driver="qemu2")
	I1201 10:17:51.561258    8893 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1201 10:17:51.609412    8893 start.go:159] libmachine.API.Create for "newest-cni-205000" (driver="qemu2")
	I1201 10:17:51.609451    8893 client.go:168] LocalClient.Create starting
	I1201 10:17:51.609608    8893 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/ca.pem
	I1201 10:17:51.609672    8893 main.go:141] libmachine: Decoding PEM data...
	I1201 10:17:51.609696    8893 main.go:141] libmachine: Parsing certificate...
	I1201 10:17:51.609758    8893 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17703-5375/.minikube/certs/cert.pem
	I1201 10:17:51.609806    8893 main.go:141] libmachine: Decoding PEM data...
	I1201 10:17:51.609821    8893 main.go:141] libmachine: Parsing certificate...
	I1201 10:17:51.610359    8893 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17703-5375/.minikube/cache/iso/arm64/minikube-v1.32.1-1701387192-17703-arm64.iso...
	I1201 10:17:51.752823    8893 main.go:141] libmachine: Creating SSH key...
	I1201 10:17:51.849401    8893 main.go:141] libmachine: Creating Disk image...
	I1201 10:17:51.849408    8893 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1201 10:17:51.849571    8893 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/newest-cni-205000/disk.qcow2.raw /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/newest-cni-205000/disk.qcow2
	I1201 10:17:51.861640    8893 main.go:141] libmachine: STDOUT: 
	I1201 10:17:51.861658    8893 main.go:141] libmachine: STDERR: 
	I1201 10:17:51.861709    8893 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/newest-cni-205000/disk.qcow2 +20000M
	I1201 10:17:51.872229    8893 main.go:141] libmachine: STDOUT: Image resized.
	
	I1201 10:17:51.872244    8893 main.go:141] libmachine: STDERR: 
	I1201 10:17:51.872267    8893 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/newest-cni-205000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/newest-cni-205000/disk.qcow2
	I1201 10:17:51.872274    8893 main.go:141] libmachine: Starting QEMU VM...
	I1201 10:17:51.872310    8893 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/newest-cni-205000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/newest-cni-205000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/newest-cni-205000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:2b:61:29:d0:ac -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/newest-cni-205000/disk.qcow2
	I1201 10:17:51.873919    8893 main.go:141] libmachine: STDOUT: 
	I1201 10:17:51.873936    8893 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:17:51.873949    8893 client.go:171] LocalClient.Create took 264.499625ms
	I1201 10:17:53.876070    8893 start.go:128] duration metric: createHost completed in 2.331555083s
	I1201 10:17:53.876135    8893 start.go:83] releasing machines lock for "newest-cni-205000", held for 2.332039542s
	W1201 10:17:53.876595    8893 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-205000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-205000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:17:53.902363    8893 out.go:177] 
	W1201 10:17:53.906391    8893 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1201 10:17:53.906477    8893 out.go:239] * 
	* 
	W1201 10:17:53.909139    8893 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 10:17:53.922336    8893 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-205000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-205000 -n newest-cni-205000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-205000 -n newest-cni-205000: exit status 7 (66.625292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-205000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.94s)
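The trace above gets through qemu-img convert/resize and fails only when libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client. The connection step can likely be reproduced in isolation with the same client binary; this sketch reuses the invocation pattern from the log but swaps the qemu command for `true`, so only the socket handshake is exercised (the substitution is an assumption, not something the test itself runs).

    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
    # while the daemon is down this prints the same error seen in the log:
    # Failed to connect to "/var/run/socket_vmnet": Connection refused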

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (0.13s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-409000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-409000 --alsologtostderr -v=1: exit status 89 (61.95475ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-409000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 10:17:44.129888    8898 out.go:296] Setting OutFile to fd 1 ...
	I1201 10:17:44.131190    8898 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:17:44.131195    8898 out.go:309] Setting ErrFile to fd 2...
	I1201 10:17:44.131198    8898 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:17:44.131379    8898 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17703-5375/.minikube/bin
	I1201 10:17:44.131577    8898 out.go:303] Setting JSON to false
	I1201 10:17:44.131587    8898 mustload.go:65] Loading cluster: default-k8s-diff-port-409000
	I1201 10:17:44.131788    8898 config.go:182] Loaded profile config "default-k8s-diff-port-409000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1201 10:17:44.143031    8898 out.go:177] * The control plane node must be running for this command
	I1201 10:17:44.154058    8898 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-409000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-409000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-409000 -n default-k8s-diff-port-409000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-409000 -n default-k8s-diff-port-409000: exit status 7 (31.955083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-409000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-409000 -n default-k8s-diff-port-409000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-409000 -n default-k8s-diff-port-409000: exit status 7 (35.200542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-409000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.13s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (5.29s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-205000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-205000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.1: exit status 80 (5.220238834s)

                                                
                                                
-- stdout --
	* [newest-cni-205000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17703
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node newest-cni-205000 in cluster newest-cni-205000
	* Restarting existing qemu2 VM for "newest-cni-205000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-205000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 10:17:54.267083    8946 out.go:296] Setting OutFile to fd 1 ...
	I1201 10:17:54.267229    8946 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:17:54.267232    8946 out.go:309] Setting ErrFile to fd 2...
	I1201 10:17:54.267235    8946 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:17:54.267360    8946 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17703-5375/.minikube/bin
	I1201 10:17:54.268312    8946 out.go:303] Setting JSON to false
	I1201 10:17:54.284254    8946 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2848,"bootTime":1701451826,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1201 10:17:54.284341    8946 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1201 10:17:54.289836    8946 out.go:177] * [newest-cni-205000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1201 10:17:54.300851    8946 out.go:177]   - MINIKUBE_LOCATION=17703
	I1201 10:17:54.296830    8946 notify.go:220] Checking for updates...
	I1201 10:17:54.307755    8946 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	I1201 10:17:54.314873    8946 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1201 10:17:54.322776    8946 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 10:17:54.330748    8946 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	I1201 10:17:54.338858    8946 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 10:17:54.343131    8946 config.go:182] Loaded profile config "newest-cni-205000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.1
	I1201 10:17:54.343415    8946 driver.go:392] Setting default libvirt URI to qemu:///system
	I1201 10:17:54.347824    8946 out.go:177] * Using the qemu2 driver based on existing profile
	I1201 10:17:54.353840    8946 start.go:298] selected driver: qemu2
	I1201 10:17:54.353846    8946 start.go:902] validating driver "qemu2" against &{Name:newest-cni-205000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.29.0-rc.1 ClusterName:newest-cni-205000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s Schedu
ledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1201 10:17:54.353918    8946 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 10:17:54.356541    8946 start_flags.go:950] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1201 10:17:54.356589    8946 cni.go:84] Creating CNI manager for ""
	I1201 10:17:54.356597    8946 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1201 10:17:54.356603    8946 start_flags.go:323] config:
	{Name:newest-cni-205000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.1 ClusterName:newest-cni-205000 Namespace:defa
ult APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: Multi
NodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1201 10:17:54.361256    8946 iso.go:125] acquiring lock: {Name:mk3aa781e939aa763c73ad5fb916ad9c0c6cf746 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 10:17:54.366851    8946 out.go:177] * Starting control plane node newest-cni-205000 in cluster newest-cni-205000
	I1201 10:17:54.370881    8946 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime docker
	I1201 10:17:54.370915    8946 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.1-docker-overlay2-arm64.tar.lz4
	I1201 10:17:54.370933    8946 cache.go:56] Caching tarball of preloaded images
	I1201 10:17:54.371038    8946 preload.go:174] Found /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1201 10:17:54.371059    8946 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.1 on docker
	I1201 10:17:54.371139    8946 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17703-5375/.minikube/profiles/newest-cni-205000/config.json ...
	I1201 10:17:54.371594    8946 start.go:365] acquiring machines lock for newest-cni-205000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:17:54.371625    8946 start.go:369] acquired machines lock for "newest-cni-205000" in 24.209µs
	I1201 10:17:54.371636    8946 start.go:96] Skipping create...Using existing machine configuration
	I1201 10:17:54.371643    8946 fix.go:54] fixHost starting: 
	I1201 10:17:54.371774    8946 fix.go:102] recreateIfNeeded on newest-cni-205000: state=Stopped err=<nil>
	W1201 10:17:54.371785    8946 fix.go:128] unexpected machine state, will restart: <nil>
	I1201 10:17:54.375825    8946 out.go:177] * Restarting existing qemu2 VM for "newest-cni-205000" ...
	I1201 10:17:54.382902    8946 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/newest-cni-205000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/newest-cni-205000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/newest-cni-205000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:2b:61:29:d0:ac -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/newest-cni-205000/disk.qcow2
	I1201 10:17:54.385225    8946 main.go:141] libmachine: STDOUT: 
	I1201 10:17:54.385247    8946 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:17:54.385297    8946 fix.go:56] fixHost completed within 13.653625ms
	I1201 10:17:54.385302    8946 start.go:83] releasing machines lock for "newest-cni-205000", held for 13.672209ms
	W1201 10:17:54.385312    8946 start.go:694] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1201 10:17:54.385353    8946 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:17:54.385359    8946 start.go:709] Will try again in 5 seconds ...
	I1201 10:17:59.387453    8946 start.go:365] acquiring machines lock for newest-cni-205000: {Name:mkcf7d41d46fb678da15577fccf899a800d25667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 10:17:59.387765    8946 start.go:369] acquired machines lock for "newest-cni-205000" in 234.833µs
	I1201 10:17:59.387865    8946 start.go:96] Skipping create...Using existing machine configuration
	I1201 10:17:59.387883    8946 fix.go:54] fixHost starting: 
	I1201 10:17:59.388501    8946 fix.go:102] recreateIfNeeded on newest-cni-205000: state=Stopped err=<nil>
	W1201 10:17:59.388529    8946 fix.go:128] unexpected machine state, will restart: <nil>
	I1201 10:17:59.399217    8946 out.go:177] * Restarting existing qemu2 VM for "newest-cni-205000" ...
	I1201 10:17:59.413510    8946 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/newest-cni-205000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17703-5375/.minikube/machines/newest-cni-205000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/newest-cni-205000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:2b:61:29:d0:ac -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17703-5375/.minikube/machines/newest-cni-205000/disk.qcow2
	I1201 10:17:59.422717    8946 main.go:141] libmachine: STDOUT: 
	I1201 10:17:59.422796    8946 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1201 10:17:59.422867    8946 fix.go:56] fixHost completed within 34.986291ms
	I1201 10:17:59.422885    8946 start.go:83] releasing machines lock for "newest-cni-205000", held for 35.098125ms
	W1201 10:17:59.423113    8946 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-205000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-205000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1201 10:17:59.429204    8946 out.go:177] 
	W1201 10:17:59.432370    8946 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1201 10:17:59.432400    8946 out.go:239] * 
	* 
	W1201 10:17:59.434492    8946 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 10:17:59.443226    8946 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-205000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-205000 -n newest-cni-205000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-205000 -n newest-cni-205000: exit status 7 (69.972083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-205000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.29s)
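Note on the failure above: the restart never reaches guest provisioning because socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), i.e. nothing was accepting connections on that socket on the agent when the qemu2 driver tried to attach its netdev. The following is a minimal, hedged diagnostic sketch of just that condition; the socket path is copied from the captured command line and the program is not part of the test suite:

    // socket_vmnet_check.go - standalone diagnostic sketch, not part of the minikube tests.
    // It dials the unix socket that socket_vmnet_client is handed in the log above;
    // a dial error of "connection refused" reproduces the driver failure mode.
    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        const sock = "/var/run/socket_vmnet" // path copied from the captured command line
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Printf("socket_vmnet is listening at %s\n", sock)
    }

If this check fails the same way, the socket_vmnet daemon on the agent needs to be running (and its socket accessible to the jenkins user) before qemu2 starts that use the socket_vmnet network can succeed.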

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-205000 image list --format=json
start_stop_delete_test.go:304: v1.29.0-rc.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.10-0",
- 	"registry.k8s.io/kube-apiserver:v1.29.0-rc.1",
- 	"registry.k8s.io/kube-controller-manager:v1.29.0-rc.1",
- 	"registry.k8s.io/kube-proxy:v1.29.0-rc.1",
- 	"registry.k8s.io/kube-scheduler:v1.29.0-rc.1",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-205000 -n newest-cni-205000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-205000 -n newest-cni-205000: exit status 7 (32.24475ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-205000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)
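In the diff above, every expected v1.29.0-rc.1 image appears on the "-want" side and nothing on "+got": "minikube image list" ran against a profile whose VM never started (see the SecondStart failure above), so the runtime had no images to report. The diff layout looks like go-cmp output; the sketch below reproduces the same shape under that assumption, with an empty got slice as in this run (image names copied from the diff, list shortened):

    // diffshape.go - illustrative only; assumes the test formats its mismatch with
    // github.com/google/go-cmp/cmp, which is what the "(-want +got)" layout suggests.
    package main

    import (
        "fmt"

        "github.com/google/go-cmp/cmp"
    )

    func main() {
        want := []string{
            "gcr.io/k8s-minikube/storage-provisioner:v5",
            "registry.k8s.io/kube-apiserver:v1.29.0-rc.1",
            "registry.k8s.io/pause:3.9",
            // remaining entries from the diff above omitted for brevity
        }
        got := []string{} // empty: the VM never came up, so no images were loaded
        fmt.Print(cmp.Diff(want, got))
    }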

TestStartStop/group/newest-cni/serial/Pause (0.11s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-205000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-205000 --alsologtostderr -v=1: exit status 89 (48.203708ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p newest-cni-205000"

-- /stdout --
** stderr ** 
	I1201 10:17:59.636715    8961 out.go:296] Setting OutFile to fd 1 ...
	I1201 10:17:59.636893    8961 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:17:59.636896    8961 out.go:309] Setting ErrFile to fd 2...
	I1201 10:17:59.636898    8961 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:17:59.637013    8961 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17703-5375/.minikube/bin
	I1201 10:17:59.637219    8961 out.go:303] Setting JSON to false
	I1201 10:17:59.637229    8961 mustload.go:65] Loading cluster: newest-cni-205000
	I1201 10:17:59.637410    8961 config.go:182] Loaded profile config "newest-cni-205000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.1
	I1201 10:17:59.641864    8961 out.go:177] * The control plane node must be running for this command
	I1201 10:17:59.649758    8961 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-205000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-205000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-205000 -n newest-cni-205000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-205000 -n newest-cni-205000: exit status 7 (32.41425ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-205000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-205000 -n newest-cni-205000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-205000 -n newest-cni-205000: exit status 7 (31.818125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-205000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.11s)


Test pass (81/247)

Order passed test Duration
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.12
10 TestDownloadOnly/v1.28.4/json-events 7.79
11 TestDownloadOnly/v1.28.4/preload-exists 0
14 TestDownloadOnly/v1.28.4/kubectl 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.09
17 TestDownloadOnly/v1.29.0-rc.1/json-events 7.14
18 TestDownloadOnly/v1.29.0-rc.1/preload-exists 0
21 TestDownloadOnly/v1.29.0-rc.1/kubectl 0
22 TestDownloadOnly/v1.29.0-rc.1/LogsDuration 0.1
23 TestDownloadOnly/DeleteAll 0.23
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.23
26 TestBinaryMirror 0.38
30 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
31 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
40 TestHyperKitDriverInstallOrUpdate 8.15
44 TestErrorSpam/start 0.45
45 TestErrorSpam/status 0.1
46 TestErrorSpam/pause 0.14
47 TestErrorSpam/unpause 0.14
48 TestErrorSpam/stop 0.18
51 TestFunctional/serial/CopySyncFile 0
53 TestFunctional/serial/AuditLog 0
59 TestFunctional/serial/CacheCmd/cache/add_remote 1.84
60 TestFunctional/serial/CacheCmd/cache/add_local 1.17
61 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
62 TestFunctional/serial/CacheCmd/cache/list 0.04
65 TestFunctional/serial/CacheCmd/cache/delete 0.07
74 TestFunctional/parallel/ConfigCmd 0.25
76 TestFunctional/parallel/DryRun 0.33
77 TestFunctional/parallel/InternationalLanguage 0.13
83 TestFunctional/parallel/AddonsCmd 0.12
98 TestFunctional/parallel/License 0.21
99 TestFunctional/parallel/Version/short 0.04
106 TestFunctional/parallel/ImageCommands/Setup 1.46
119 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
127 TestFunctional/parallel/ImageCommands/ImageRemove 0.08
129 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.13
130 TestFunctional/parallel/ProfileCmd/profile_not_create 0.14
131 TestFunctional/parallel/ProfileCmd/profile_list 0.11
132 TestFunctional/parallel/ProfileCmd/profile_json_output 0.11
137 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.04
139 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.16
140 TestFunctional/delete_addon-resizer_images 0.18
141 TestFunctional/delete_my-image_image 0.04
142 TestFunctional/delete_minikube_cached_images 0.04
151 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.06
156 TestJSONOutput/start/Audit 0
158 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
159 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
162 TestJSONOutput/pause/Audit 0
164 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
165 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
168 TestJSONOutput/unpause/Audit 0
170 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
171 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
173 TestJSONOutput/stop/Command 0.05
174 TestJSONOutput/stop/Audit 0
176 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
177 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
178 TestErrorJSONOutput 0.34
183 TestMainNoArgs 0.03
241 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
245 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
246 TestNoKubernetes/serial/ProfileList 0.15
247 TestNoKubernetes/serial/Stop 0.06
249 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.05
267 TestStartStop/group/old-k8s-version/serial/Stop 0.09
268 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.1
272 TestStartStop/group/no-preload/serial/Stop 0.06
273 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.1
289 TestStartStop/group/embed-certs/serial/Stop 0.06
290 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.1
294 TestStartStop/group/default-k8s-diff-port/serial/Stop 0.06
295 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.09
307 TestStartStop/group/newest-cni/serial/DeployApp 0
308 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
309 TestStartStop/group/newest-cni/serial/Stop 0.07
310 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.1
312 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
313 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.12s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-993000
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-993000: exit status 85 (124.283625ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-993000 | jenkins | v1.32.0 | 01 Dec 23 10:03 PST |          |
	|         | -p download-only-993000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/01 10:03:06
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.21.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1201 10:03:06.058288    5827 out.go:296] Setting OutFile to fd 1 ...
	I1201 10:03:06.058458    5827 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:03:06.058462    5827 out.go:309] Setting ErrFile to fd 2...
	I1201 10:03:06.058464    5827 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:03:06.058581    5827 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17703-5375/.minikube/bin
	W1201 10:03:06.058665    5827 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17703-5375/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17703-5375/.minikube/config/config.json: no such file or directory
	I1201 10:03:06.059916    5827 out.go:303] Setting JSON to true
	I1201 10:03:06.077061    5827 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1960,"bootTime":1701451826,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1201 10:03:06.077136    5827 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1201 10:03:06.084877    5827 out.go:97] [download-only-993000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1201 10:03:06.088799    5827 out.go:169] MINIKUBE_LOCATION=17703
	I1201 10:03:06.085024    5827 notify.go:220] Checking for updates...
	W1201 10:03:06.085070    5827 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball: no such file or directory
	I1201 10:03:06.111827    5827 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	I1201 10:03:06.115878    5827 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1201 10:03:06.123831    5827 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 10:03:06.131851    5827 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	W1201 10:03:06.138930    5827 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1201 10:03:06.139163    5827 driver.go:392] Setting default libvirt URI to qemu:///system
	I1201 10:03:06.143658    5827 out.go:97] Using the qemu2 driver based on user configuration
	I1201 10:03:06.143665    5827 start.go:298] selected driver: qemu2
	I1201 10:03:06.143670    5827 start.go:902] validating driver "qemu2" against <nil>
	I1201 10:03:06.143717    5827 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1201 10:03:06.147846    5827 out.go:169] Automatically selected the socket_vmnet network
	I1201 10:03:06.154612    5827 start_flags.go:394] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1201 10:03:06.154721    5827 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1201 10:03:06.154856    5827 cni.go:84] Creating CNI manager for ""
	I1201 10:03:06.154878    5827 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1201 10:03:06.154885    5827 start_flags.go:323] config:
	{Name:download-only-993000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-993000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRun
time:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1201 10:03:06.160158    5827 iso.go:125] acquiring lock: {Name:mk3aa781e939aa763c73ad5fb916ad9c0c6cf746 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 10:03:06.162879    5827 out.go:97] Downloading VM boot image ...
	I1201 10:03:06.162901    5827 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/iso/arm64/minikube-v1.32.1-1701387192-17703-arm64.iso
	I1201 10:03:13.658964    5827 out.go:97] Starting control plane node download-only-993000 in cluster download-only-993000
	I1201 10:03:13.658996    5827 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1201 10:03:13.715667    5827 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I1201 10:03:13.715684    5827 cache.go:56] Caching tarball of preloaded images
	I1201 10:03:13.715859    5827 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1201 10:03:13.720014    5827 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1201 10:03:13.720022    5827 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I1201 10:03:13.794571    5827 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I1201 10:03:19.159736    5827 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I1201 10:03:19.159889    5827 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I1201 10:03:19.801020    5827 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1201 10:03:19.801233    5827 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17703-5375/.minikube/profiles/download-only-993000/config.json ...
	I1201 10:03:19.801249    5827 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17703-5375/.minikube/profiles/download-only-993000/config.json: {Name:mk1c39f52642e4a0152308e0d2fa63bca04e3751 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 10:03:19.801457    5827 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1201 10:03:19.801631    5827 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 -> /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/darwin/arm64/v1.16.0/kubectl
	I1201 10:03:20.168577    5827 out.go:169] 
	W1201 10:03:20.178774    5827 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/17703-5375/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1064c0a80 0x1064c0a80 0x1064c0a80 0x1064c0a80 0x1064c0a80 0x1064c0a80 0x1064c0a80] Decompressors:map[bz2:0x14000801060 gz:0x14000801068 tar:0x14000801010 tar.bz2:0x14000801020 tar.gz:0x14000801030 tar.xz:0x14000801040 tar.zst:0x14000801050 tbz2:0x14000801020 tgz:0x14000801030 txz:0x14000801040 tzst:0x14000801050 xz:0x14000801070 zip:0x14000801080 zst:0x14000801078] Getters:map[file:0x1400261c770 http:0x14000516190 https:0x14000516500] Dir:false ProgressListener
:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W1201 10:03:20.178797    5827 out_reason.go:110] 
	W1201 10:03:20.190646    5827 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I1201 10:03:20.193652    5827 out.go:169] 
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-993000"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.12s)
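The notable part of this otherwise passing log is the cached-kubectl failure it records: the v1.16.0 darwin/arm64 kubectl download aborts because its checksum file returns HTTP 404. Kubernetes v1.16.0 predates Apple Silicon, so no darwin/arm64 client binaries (or checksums) were published for that release. A hedged diagnostic sketch that checks the same URL directly (URL copied from the error above; not part of the suite):

    // checksum404.go - confirms the "bad response code: 404" recorded in the log
    // for the v1.16.0 darwin/arm64 kubectl checksum file.
    package main

    import (
        "fmt"
        "net/http"
    )

    func main() {
        url := "https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1"
        resp, err := http.Head(url)
        if err != nil {
            fmt.Println("request failed:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println(url, "->", resp.Status) // expected per the log above: 404 Not Found
    }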

TestDownloadOnly/v1.28.4/json-events (7.79s)

=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-993000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-993000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=qemu2 : (7.784880708s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (7.79s)

TestDownloadOnly/v1.28.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
--- PASS: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.28.4/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-993000
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-993000: exit status 85 (89.987083ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-993000 | jenkins | v1.32.0 | 01 Dec 23 10:03 PST |          |
	|         | -p download-only-993000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-993000 | jenkins | v1.32.0 | 01 Dec 23 10:03 PST |          |
	|         | -p download-only-993000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/01 10:03:20
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.21.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1201 10:03:20.422416    5840 out.go:296] Setting OutFile to fd 1 ...
	I1201 10:03:20.422553    5840 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:03:20.422559    5840 out.go:309] Setting ErrFile to fd 2...
	I1201 10:03:20.422561    5840 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:03:20.422687    5840 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17703-5375/.minikube/bin
	W1201 10:03:20.422753    5840 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17703-5375/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17703-5375/.minikube/config/config.json: no such file or directory
	I1201 10:03:20.423691    5840 out.go:303] Setting JSON to true
	I1201 10:03:20.439625    5840 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1974,"bootTime":1701451826,"procs":448,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1201 10:03:20.439729    5840 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1201 10:03:20.445382    5840 out.go:97] [download-only-993000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1201 10:03:20.449389    5840 out.go:169] MINIKUBE_LOCATION=17703
	I1201 10:03:20.445461    5840 notify.go:220] Checking for updates...
	I1201 10:03:20.457379    5840 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	I1201 10:03:20.461374    5840 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1201 10:03:20.469406    5840 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 10:03:20.477347    5840 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	W1201 10:03:20.484325    5840 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1201 10:03:20.484623    5840 config.go:182] Loaded profile config "download-only-993000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W1201 10:03:20.484665    5840 start.go:810] api.Load failed for download-only-993000: filestore "download-only-993000": Docker machine "download-only-993000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1201 10:03:20.484718    5840 driver.go:392] Setting default libvirt URI to qemu:///system
	W1201 10:03:20.484736    5840 start.go:810] api.Load failed for download-only-993000: filestore "download-only-993000": Docker machine "download-only-993000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1201 10:03:20.488316    5840 out.go:97] Using the qemu2 driver based on existing profile
	I1201 10:03:20.488324    5840 start.go:298] selected driver: qemu2
	I1201 10:03:20.488328    5840 start.go:902] validating driver "qemu2" against &{Name:download-only-993000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-993000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binar
yMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1201 10:03:20.490944    5840 cni.go:84] Creating CNI manager for ""
	I1201 10:03:20.490957    5840 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1201 10:03:20.490963    5840 start_flags.go:323] config:
	{Name:download-only-993000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-993000 Namespace:def
ault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath
: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1201 10:03:20.495608    5840 iso.go:125] acquiring lock: {Name:mk3aa781e939aa763c73ad5fb916ad9c0c6cf746 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 10:03:20.499204    5840 out.go:97] Starting control plane node download-only-993000 in cluster download-only-993000
	I1201 10:03:20.499210    5840 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1201 10:03:20.551569    5840 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I1201 10:03:20.551583    5840 cache.go:56] Caching tarball of preloaded images
	I1201 10:03:20.551734    5840 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1201 10:03:20.557908    5840 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I1201 10:03:20.557915    5840 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 ...
	I1201 10:03:20.630550    5840 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4?checksum=md5:6fb922d1d9dc01a9d3c0b965ed219613 -> /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I1201 10:03:25.224860    5840 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 ...
	I1201 10:03:25.225006    5840 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-993000"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.09s)

TestDownloadOnly/v1.29.0-rc.1/json-events (7.14s)

=== RUN   TestDownloadOnly/v1.29.0-rc.1/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-993000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.1 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-993000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.1 --container-runtime=docker --driver=qemu2 : (7.136688375s)
--- PASS: TestDownloadOnly/v1.29.0-rc.1/json-events (7.14s)

TestDownloadOnly/v1.29.0-rc.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.1/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.1/preload-exists (0.00s)

TestDownloadOnly/v1.29.0-rc.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.1/kubectl
--- PASS: TestDownloadOnly/v1.29.0-rc.1/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.1/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.29.0-rc.1/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-993000
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-993000: exit status 85 (96.56675ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-993000 | jenkins | v1.32.0 | 01 Dec 23 10:03 PST |          |
	|         | -p download-only-993000           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=qemu2                    |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-993000 | jenkins | v1.32.0 | 01 Dec 23 10:03 PST |          |
	|         | -p download-only-993000           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=qemu2                    |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-993000 | jenkins | v1.32.0 | 01 Dec 23 10:03 PST |          |
	|         | -p download-only-993000           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.29.0-rc.1 |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=qemu2                    |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/01 10:03:28
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.21.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1201 10:03:28.300310    5850 out.go:296] Setting OutFile to fd 1 ...
	I1201 10:03:28.300457    5850 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:03:28.300460    5850 out.go:309] Setting ErrFile to fd 2...
	I1201 10:03:28.300462    5850 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:03:28.300589    5850 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17703-5375/.minikube/bin
	W1201 10:03:28.300666    5850 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17703-5375/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17703-5375/.minikube/config/config.json: no such file or directory
	I1201 10:03:28.301615    5850 out.go:303] Setting JSON to true
	I1201 10:03:28.317567    5850 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1982,"bootTime":1701451826,"procs":446,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1201 10:03:28.317648    5850 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1201 10:03:28.323661    5850 out.go:97] [download-only-993000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1201 10:03:28.327594    5850 out.go:169] MINIKUBE_LOCATION=17703
	I1201 10:03:28.323764    5850 notify.go:220] Checking for updates...
	I1201 10:03:28.335585    5850 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	I1201 10:03:28.342670    5850 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1201 10:03:28.346539    5850 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 10:03:28.350584    5850 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	W1201 10:03:28.356507    5850 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1201 10:03:28.356808    5850 config.go:182] Loaded profile config "download-only-993000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	W1201 10:03:28.356835    5850 start.go:810] api.Load failed for download-only-993000: filestore "download-only-993000": Docker machine "download-only-993000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1201 10:03:28.356890    5850 driver.go:392] Setting default libvirt URI to qemu:///system
	W1201 10:03:28.356913    5850 start.go:810] api.Load failed for download-only-993000: filestore "download-only-993000": Docker machine "download-only-993000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1201 10:03:28.360644    5850 out.go:97] Using the qemu2 driver based on existing profile
	I1201 10:03:28.360654    5850 start.go:298] selected driver: qemu2
	I1201 10:03:28.360658    5850 start.go:902] validating driver "qemu2" against &{Name:download-only-993000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-993000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binar
yMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1201 10:03:28.363037    5850 cni.go:84] Creating CNI manager for ""
	I1201 10:03:28.363053    5850 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1201 10:03:28.363061    5850 start_flags.go:323] config:
	{Name:download-only-993000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.1 ClusterName:download-only-993000 Namespac
e:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwar
ePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1201 10:03:28.367561    5850 iso.go:125] acquiring lock: {Name:mk3aa781e939aa763c73ad5fb916ad9c0c6cf746 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 10:03:28.371442    5850 out.go:97] Starting control plane node download-only-993000 in cluster download-only-993000
	I1201 10:03:28.371449    5850 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime docker
	I1201 10:03:28.428976    5850 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.1/preloaded-images-k8s-v18-v1.29.0-rc.1-docker-overlay2-arm64.tar.lz4
	I1201 10:03:28.428986    5850 cache.go:56] Caching tarball of preloaded images
	I1201 10:03:28.429814    5850 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime docker
	I1201 10:03:28.433267    5850 out.go:97] Downloading Kubernetes v1.29.0-rc.1 preload ...
	I1201 10:03:28.433280    5850 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.1-docker-overlay2-arm64.tar.lz4 ...
	I1201 10:03:28.515706    5850 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.1/preloaded-images-k8s-v18-v1.29.0-rc.1-docker-overlay2-arm64.tar.lz4?checksum=md5:e6c70ba8af96149bcd57a348676cbfba -> /Users/jenkins/minikube-integration/17703-5375/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.1-docker-overlay2-arm64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-993000"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.1/LogsDuration (0.10s)

TestDownloadOnly/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.23s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.23s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-993000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.23s)

TestBinaryMirror (0.38s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-407000 --alsologtostderr --binary-mirror http://127.0.0.1:50324 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-407000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-407000
--- PASS: TestBinaryMirror (0.38s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-659000
addons_test.go:927: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-659000: exit status 85 (63.523083ms)

-- stdout --
	* Profile "addons-659000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-659000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-659000
addons_test.go:938: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-659000: exit status 85 (66.318458ms)

-- stdout --
	* Profile "addons-659000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-659000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestHyperKitDriverInstallOrUpdate (8.15s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (8.15s)

TestErrorSpam/start (0.45s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-349000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-349000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-349000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000 start --dry-run
--- PASS: TestErrorSpam/start (0.45s)

TestErrorSpam/status (0.1s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-349000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-349000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000 status: exit status 7 (35.106208ms)

-- stdout --
	nospam-349000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-349000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-349000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-349000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000 status: exit status 7 (31.303834ms)

-- stdout --
	nospam-349000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-349000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-349000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-349000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000 status: exit status 7 (31.670041ms)

-- stdout --
	nospam-349000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-349000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.10s)

TestErrorSpam/pause (0.14s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-349000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-349000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000 pause: exit status 89 (47.675042ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p nospam-349000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-349000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000 pause" failed: exit status 89
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-349000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-349000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000 pause: exit status 89 (45.896292ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p nospam-349000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-349000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000 pause" failed: exit status 89
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-349000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-349000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000 pause: exit status 89 (44.594125ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p nospam-349000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-349000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000 pause" failed: exit status 89
--- PASS: TestErrorSpam/pause (0.14s)

TestErrorSpam/unpause (0.14s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-349000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-349000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000 unpause: exit status 89 (47.791833ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p nospam-349000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-349000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000 unpause" failed: exit status 89
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-349000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-349000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000 unpause: exit status 89 (45.917417ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p nospam-349000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-349000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000 unpause" failed: exit status 89
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-349000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-349000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000 unpause: exit status 89 (47.78875ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p nospam-349000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-349000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000 unpause" failed: exit status 89
--- PASS: TestErrorSpam/unpause (0.14s)

TestErrorSpam/stop (0.18s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-349000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000 stop
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-349000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-349000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-349000 stop
--- PASS: TestErrorSpam/stop (0.18s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/17703-5375/.minikube/files/etc/test/nested/copy/5825/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/CacheCmd/cache/add_remote (1.84s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (1.84s)

TestFunctional/serial/CacheCmd/cache/add_local (1.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-149000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local1605048184/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 cache add minikube-local-cache-test:functional-149000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 cache delete minikube-local-cache-test:functional-149000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-149000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.17s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/parallel/ConfigCmd (0.25s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-149000 config get cpus: exit status 14 (32.78475ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-149000 config get cpus: exit status 14 (35.437875ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.25s)

TestFunctional/parallel/DryRun (0.33s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-149000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-149000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (187.094667ms)

-- stdout --
	* [functional-149000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17703
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1201 10:05:08.305034    6408 out.go:296] Setting OutFile to fd 1 ...
	I1201 10:05:08.305246    6408 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:05:08.305251    6408 out.go:309] Setting ErrFile to fd 2...
	I1201 10:05:08.305255    6408 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:05:08.305416    6408 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17703-5375/.minikube/bin
	I1201 10:05:08.306804    6408 out.go:303] Setting JSON to false
	I1201 10:05:08.326609    6408 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2082,"bootTime":1701451826,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1201 10:05:08.326700    6408 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1201 10:05:08.334825    6408 out.go:177] * [functional-149000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1201 10:05:08.345745    6408 out.go:177]   - MINIKUBE_LOCATION=17703
	I1201 10:05:08.342758    6408 notify.go:220] Checking for updates...
	I1201 10:05:08.352730    6408 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	I1201 10:05:08.358682    6408 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1201 10:05:08.366672    6408 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 10:05:08.374715    6408 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	I1201 10:05:08.381675    6408 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 10:05:08.386099    6408 config.go:182] Loaded profile config "functional-149000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1201 10:05:08.386415    6408 driver.go:392] Setting default libvirt URI to qemu:///system
	I1201 10:05:08.390583    6408 out.go:177] * Using the qemu2 driver based on existing profile
	I1201 10:05:08.397732    6408 start.go:298] selected driver: qemu2
	I1201 10:05:08.397738    6408 start.go:902] validating driver "qemu2" against &{Name:functional-149000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.4 ClusterName:functional-149000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1201 10:05:08.397817    6408 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 10:05:08.404743    6408 out.go:177] 
	W1201 10:05:08.408754    6408 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1201 10:05:08.411694    6408 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-149000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.33s)

TestFunctional/parallel/InternationalLanguage (0.13s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-149000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-149000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (129.089042ms)

-- stdout --
	* [functional-149000] minikube v1.32.0 sur Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17703
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1201 10:05:08.588092    6419 out.go:296] Setting OutFile to fd 1 ...
	I1201 10:05:08.588219    6419 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:05:08.588222    6419 out.go:309] Setting ErrFile to fd 2...
	I1201 10:05:08.588225    6419 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1201 10:05:08.588348    6419 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17703-5375/.minikube/bin
	I1201 10:05:08.589572    6419 out.go:303] Setting JSON to false
	I1201 10:05:08.606059    6419 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2082,"bootTime":1701451826,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1201 10:05:08.606158    6419 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1201 10:05:08.610692    6419 out.go:177] * [functional-149000] minikube v1.32.0 sur Darwin 14.1.2 (arm64)
	I1201 10:05:08.620700    6419 out.go:177]   - MINIKUBE_LOCATION=17703
	I1201 10:05:08.616756    6419 notify.go:220] Checking for updates...
	I1201 10:05:08.628739    6419 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	I1201 10:05:08.636750    6419 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1201 10:05:08.644701    6419 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 10:05:08.648500    6419 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	I1201 10:05:08.655793    6419 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 10:05:08.657755    6419 config.go:182] Loaded profile config "functional-149000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1201 10:05:08.658020    6419 driver.go:392] Setting default libvirt URI to qemu:///system
	I1201 10:05:08.661699    6419 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I1201 10:05:08.668686    6419 start.go:298] selected driver: qemu2
	I1201 10:05:08.668694    6419 start.go:902] validating driver "qemu2" against &{Name:functional-149000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.4 ClusterName:functional-149000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1201 10:05:08.668807    6419 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 10:05:08.675765    6419 out.go:177] 
	W1201 10:05:08.677377    6419 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1201 10:05:08.681716    6419 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)

TestFunctional/parallel/AddonsCmd (0.12s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/License (0.21s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.21s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/ImageCommands/Setup (1.46s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.420984917s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-149000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.46s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-149000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 image rm gcr.io/google-containers/addon-resizer:functional-149000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.08s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-149000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 image save --daemon gcr.io/google-containers/addon-resizer:functional-149000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-149000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.13s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.14s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.14s)

TestFunctional/parallel/ProfileCmd/profile_list (0.11s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1314: Took "71.675334ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1328: Took "35.700833ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.11s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.11s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1365: Took "70.242583ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1378: Took "35.669166ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.11s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.012935667s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-149000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

TestFunctional/delete_addon-resizer_images (0.18s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-149000
--- PASS: TestFunctional/delete_addon-resizer_images (0.18s)

TestFunctional/delete_my-image_image (0.04s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-149000
--- PASS: TestFunctional/delete_my-image_image (0.04s)

TestFunctional/delete_minikube_cached_images (0.04s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-149000
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.06s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-831000 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.06s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (0.05s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-730000 --output=json --user=testUser
--- PASS: TestJSONOutput/stop/Command (0.05s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.34s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-125000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-125000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (119.019375ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"a6bceda1-aa69-441f-b5d9-34ac8d57a220","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-125000] minikube v1.32.0 on Darwin 14.1.2 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ae1a189f-7515-4472-a233-6c83760aa260","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17703"}}
	{"specversion":"1.0","id":"bdebc3e5-6580-487a-921c-a784cd0f476e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig"}}
	{"specversion":"1.0","id":"3297a2f3-0ca9-4d45-919d-09d325c04d15","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"7e793ac8-9330-4d14-bb0b-a3942c00f4a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f0ab49dc-2425-474d-8485-6dd314660ce1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube"}}
	{"specversion":"1.0","id":"4b0bb3e8-a065-4165-bb09-29d778ae6f33","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0e60b82c-32e3-4f04-acc5-c57d249b8e7c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-125000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-125000
--- PASS: TestErrorJSONOutput (0.34s)
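
Note: the stdout block above is newline-delimited CloudEvents-style JSON. As an illustrative Go sketch only (not minikube's own code; the struct and field names are assumptions taken from the captured output), one way to pick out error events such as DRV_UNSUPPORTED_OS from that stream:

	// Decode CloudEvents-style JSON lines from stdin and print error events.
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	type minikubeEvent struct {
		SpecVersion string            `json:"specversion"`
		ID          string            `json:"id"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var ev minikubeEvent
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // ignore lines that are not JSON
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("%s (exit code %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
			}
		}
	}

Fed the stdout block above on stdin, this sketch would print the DRV_UNSUPPORTED_OS message together with exit code 56.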

                                                
                                    
TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-945000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-945000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (97.816083ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-945000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17703
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17703-5375/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17703-5375/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
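
Note: the MK_USAGE failure above comes from the mutual exclusion of --no-kubernetes and --kubernetes-version. A minimal sketch of that kind of flag validation, assuming the standard library flag package and hypothetical flag names mirroring the CLI (this is not minikube's implementation):

	// Reject mutually exclusive flags with a usage error and a non-zero exit.
	package main

	import (
		"flag"
		"fmt"
		"os"
	)

	func main() {
		noKubernetes := flag.Bool("no-kubernetes", false, "start without Kubernetes")
		kubernetesVersion := flag.String("kubernetes-version", "", "Kubernetes version to use")
		flag.Parse()

		if *noKubernetes && *kubernetesVersion != "" {
			fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
			os.Exit(14) // the run above exits with status 14 for this usage error
		}
	}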

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-945000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-945000 "sudo systemctl is-active --quiet service kubelet": exit status 89 (44.653292ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p NoKubernetes-945000"

                                                
                                                
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
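
Note: the check above treats any non-zero exit from "systemctl is-active --quiet service kubelet" (run via minikube ssh) as confirmation that the kubelet is not running. A minimal Go sketch of reading an exit code that way, using a hypothetical local command rather than the ssh wrapper:

	// Run a command and inspect its exit code; non-zero means the unit is inactive.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Hypothetical direct invocation; the test runs this through "minikube ssh".
		cmd := exec.Command("systemctl", "is-active", "--quiet", "service", "kubelet")
		err := cmd.Run()
		if err == nil {
			fmt.Println("kubelet is active")
			return
		}
		if exitErr, ok := err.(*exec.ExitError); ok {
			fmt.Printf("kubelet is not active (exit code %d)\n", exitErr.ExitCode())
			return
		}
		fmt.Println("failed to run systemctl:", err)
	}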

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.15s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.15s)

                                                
                                    
TestNoKubernetes/serial/Stop (0.06s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-945000
--- PASS: TestNoKubernetes/serial/Stop (0.06s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-945000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-945000 "sudo systemctl is-active --quiet service kubelet": exit status 89 (49.618417ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p NoKubernetes-945000"

                                                
                                                
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-277000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (0.09s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-277000 -n old-k8s-version-277000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-277000 -n old-k8s-version-277000: exit status 7 (32.118375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-277000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.10s)
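
Note: the --format={{.Host}} argument above is a Go text/template rendered against the status data, which is why the stdout is just "Stopped". A minimal sketch of that template behaviour, assuming an illustrative Status struct rather than minikube's actual type:

	// Render a {{.Host}}-style format string against a status value.
	package main

	import (
		"os"
		"text/template"
	)

	type Status struct {
		Host    string
		Kubelet string
	}

	func main() {
		tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
		_ = tmpl.Execute(os.Stdout, Status{Host: "Stopped", Kubelet: "Stopped"})
	}

Running this prints "Stopped", matching the captured output; exit status 7 is then tolerated by the test as "stopped but present".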

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-322000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/no-preload/serial/Stop (0.06s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-322000 -n no-preload-322000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-322000 -n no-preload-322000: exit status 7 (36.922625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-322000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-920000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/embed-certs/serial/Stop (0.06s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-920000 -n embed-certs-920000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-920000 -n embed-certs-920000: exit status 7 (32.157291ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-920000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-409000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (0.06s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-409000 -n default-k8s-diff-port-409000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-409000 -n default-k8s-diff-port-409000: exit status 7 (30.945666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-409000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.09s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-205000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-205000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/newest-cni/serial/Stop (0.07s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-205000 -n newest-cni-205000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-205000 -n newest-cni-205000: exit status 7 (32.42575ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-205000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.10s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    

Test skip (24/247)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.1/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.1/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.1/binaries (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:213: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (12.04s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-149000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1010200544/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1701453872087733000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1010200544/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1701453872087733000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1010200544/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1701453872087733000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1010200544/001/test-1701453872087733000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-149000 ssh "findmnt -T /mount-9p | grep 9p": exit status 89 (56.4725ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-149000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-149000 ssh "findmnt -T /mount-9p | grep 9p": exit status 89 (92.804584ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-149000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-149000 ssh "findmnt -T /mount-9p | grep 9p": exit status 89 (98.216792ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-149000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-149000 ssh "findmnt -T /mount-9p | grep 9p": exit status 89 (96.001667ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-149000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-149000 ssh "findmnt -T /mount-9p | grep 9p": exit status 89 (96.035666ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-149000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-149000 ssh "findmnt -T /mount-9p | grep 9p": exit status 89 (107.43025ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-149000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-149000 ssh "findmnt -T /mount-9p | grep 9p": exit status 89 (107.6375ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-149000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-149000 ssh "sudo umount -f /mount-9p": exit status 89 (56.327708ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-149000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-149000 ssh \"sudo umount -f /mount-9p\"": exit status 89
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-149000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1010200544/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (12.04s)
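
Note: the repeated findmnt checks above form a poll-until-deadline loop that eventually gives up because the mount never appears (macOS withholds the prompt that would allow a non-code-signed binary to listen on a non-localhost port). A minimal Go sketch of that polling pattern, with checkMount as a hypothetical stand-in for the ssh command:

	// Poll a condition at a fixed interval until it succeeds or the deadline passes.
	package main

	import (
		"fmt"
		"time"
	)

	func pollUntil(timeout, interval time.Duration, check func() bool) bool {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if check() {
				return true
			}
			time.Sleep(interval)
		}
		return false
	}

	func main() {
		checkMount := func() bool {
			// hypothetical: would run minikube ssh "findmnt -T /mount-9p | grep 9p"
			// and return true only on exit status 0
			return false
		}
		if !pollUntil(10*time.Second, time.Second, checkMount) {
			fmt.Println("mount did not appear before the deadline; skipping")
		}
	}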

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (12.48s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-149000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port3529444563/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-149000 ssh "findmnt -T /mount-9p | grep 9p": exit status 89 (62.733333ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-149000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-149000 ssh "findmnt -T /mount-9p | grep 9p": exit status 89 (86.910125ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-149000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-149000 ssh "findmnt -T /mount-9p | grep 9p": exit status 89 (90.210625ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-149000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-149000 ssh "findmnt -T /mount-9p | grep 9p": exit status 89 (87.346792ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-149000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-149000 ssh "findmnt -T /mount-9p | grep 9p": exit status 89 (100.973209ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-149000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-149000 ssh "findmnt -T /mount-9p | grep 9p": exit status 89 (96.708209ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-149000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-149000 ssh "findmnt -T /mount-9p | grep 9p": exit status 89 (99.732167ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-149000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-149000 ssh "sudo umount -f /mount-9p": exit status 89 (49.753834ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-149000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-149000 ssh \"sudo umount -f /mount-9p\"": exit status 89
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-149000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port3529444563/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (12.48s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (11.62s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-149000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1121408218/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-149000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1121408218/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-149000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1121408218/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-149000 ssh "findmnt -T" /mount1: exit status 89 (87.1165ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-149000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-149000 ssh "findmnt -T" /mount1: exit status 89 (94.332875ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-149000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-149000 ssh "findmnt -T" /mount1: exit status 89 (94.411167ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-149000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-149000 ssh "findmnt -T" /mount1: exit status 89 (91.851042ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-149000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-149000 ssh "findmnt -T" /mount1: exit status 89 (101.107333ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-149000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-149000 ssh "findmnt -T" /mount1: exit status 89 (94.941792ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-149000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-149000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-149000 ssh "findmnt -T" /mount1: exit status 89 (106.476583ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-149000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-149000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1121408218/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-149000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1121408218/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-149000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1121408218/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (11.62s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestNetworkPlugins/group/cilium (2.63s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-384000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-384000

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-384000

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-384000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-384000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-384000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-384000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-384000

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-384000

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-384000

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-384000

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-384000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-384000"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-384000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-384000"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-384000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-384000"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-384000

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-384000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-384000"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-384000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-384000"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-384000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-384000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-384000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-384000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-384000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-384000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-384000" does not exist

>>> k8s: api server logs:
error: context "cilium-384000" does not exist

>>> host: /etc/cni:
* Profile "cilium-384000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-384000"

>>> host: ip a s:
* Profile "cilium-384000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-384000"

>>> host: ip r s:
* Profile "cilium-384000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-384000"

>>> host: iptables-save:
* Profile "cilium-384000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-384000"

>>> host: iptables table nat:
* Profile "cilium-384000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-384000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-384000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-384000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-384000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-384000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-384000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-384000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-384000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-384000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-384000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-384000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-384000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-384000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-384000"

>>> host: kubelet daemon config:
* Profile "cilium-384000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-384000"

>>> k8s: kubelet logs:
* Profile "cilium-384000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-384000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-384000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-384000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-384000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-384000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-384000

>>> host: docker daemon status:
* Profile "cilium-384000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-384000"

>>> host: docker daemon config:
* Profile "cilium-384000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-384000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-384000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-384000"

>>> host: docker system info:
* Profile "cilium-384000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-384000"

>>> host: cri-docker daemon status:
* Profile "cilium-384000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-384000"

>>> host: cri-docker daemon config:
* Profile "cilium-384000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-384000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-384000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-384000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-384000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-384000"

>>> host: cri-dockerd version:
* Profile "cilium-384000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-384000"

>>> host: containerd daemon status:
* Profile "cilium-384000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-384000"

>>> host: containerd daemon config:
* Profile "cilium-384000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-384000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-384000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-384000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-384000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-384000"

>>> host: containerd config dump:
* Profile "cilium-384000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-384000"

>>> host: crio daemon status:
* Profile "cilium-384000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-384000"

>>> host: crio daemon config:
* Profile "cilium-384000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-384000"

>>> host: /etc/crio:
* Profile "cilium-384000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-384000"

>>> host: crio config:
* Profile "cilium-384000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-384000"

----------------------- debugLogs end: cilium-384000 [took: 2.399289583s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-384000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-384000
--- SKIP: TestNetworkPlugins/group/cilium (2.63s)

TestStartStop/group/disable-driver-mounts (0.23s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-918000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-918000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.23s)
