Test Report: QEMU_macOS 16543

1d8c8d61bd1d0bdb169313beec8b7528c236b134:2023-05-20:29342

Failed tests (86/242)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 35.66
7 TestDownloadOnly/v1.16.0/kubectl 0
20 TestOffline 10.01
22 TestAddons/Setup 44.81
23 TestCertOptions 10.05
24 TestCertExpiration 195.3
25 TestDockerFlags 10.07
26 TestForceSystemdFlag 11.78
27 TestForceSystemdEnv 10.03
70 TestFunctional/parallel/ServiceCmdConnect 36.62
137 TestImageBuild/serial/BuildWithBuildArg 1.14
146 TestIngressAddonLegacy/serial/ValidateIngressAddons 54.29
181 TestMountStart/serial/StartWithMountFirst 10.43
184 TestMultiNode/serial/FreshStart2Nodes 9.91
185 TestMultiNode/serial/DeployApp2Nodes 101.42
186 TestMultiNode/serial/PingHostFrom2Pods 0.08
187 TestMultiNode/serial/AddNode 0.07
188 TestMultiNode/serial/ProfileList 0.11
189 TestMultiNode/serial/CopyFile 0.06
190 TestMultiNode/serial/StopNode 0.13
191 TestMultiNode/serial/StartAfterStop 0.1
192 TestMultiNode/serial/RestartKeepsNodes 5.37
193 TestMultiNode/serial/DeleteNode 0.1
194 TestMultiNode/serial/StopMultiNode 0.15
195 TestMultiNode/serial/RestartMultiNode 5.26
196 TestMultiNode/serial/ValidateNameConflict 19.92
200 TestPreload 10.01
202 TestScheduledStopUnix 9.95
203 TestSkaffold 18.15
206 TestRunningBinaryUpgrade 139.91
208 TestKubernetesUpgrade 15.24
221 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.39
222 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.22
223 TestStoppedBinaryUpgrade/Setup 133.59
225 TestPause/serial/Start 9.74
235 TestNoKubernetes/serial/StartWithK8s 9.79
236 TestNoKubernetes/serial/StartWithStopK8s 5.46
237 TestNoKubernetes/serial/Start 5.46
241 TestNoKubernetes/serial/StartNoArgs 5.47
243 TestNetworkPlugins/group/auto/Start 9.88
244 TestNetworkPlugins/group/kindnet/Start 9.78
245 TestNetworkPlugins/group/calico/Start 9.82
246 TestNetworkPlugins/group/custom-flannel/Start 9.75
247 TestNetworkPlugins/group/false/Start 9.84
248 TestNetworkPlugins/group/enable-default-cni/Start 9.72
249 TestNetworkPlugins/group/flannel/Start 9.85
250 TestStoppedBinaryUpgrade/Upgrade 2.61
251 TestStoppedBinaryUpgrade/MinikubeLogs 0.12
252 TestNetworkPlugins/group/bridge/Start 9.7
253 TestNetworkPlugins/group/kubenet/Start 10.35
255 TestStartStop/group/old-k8s-version/serial/FirstStart 9.92
257 TestStartStop/group/no-preload/serial/FirstStart 9.89
258 TestStartStop/group/old-k8s-version/serial/DeployApp 0.1
259 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
262 TestStartStop/group/old-k8s-version/serial/SecondStart 6.95
263 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
264 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
265 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
266 TestStartStop/group/old-k8s-version/serial/Pause 0.1
268 TestStartStop/group/embed-certs/serial/FirstStart 11.4
269 TestStartStop/group/no-preload/serial/DeployApp 0.1
270 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
273 TestStartStop/group/no-preload/serial/SecondStart 7.03
274 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
275 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
276 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
277 TestStartStop/group/no-preload/serial/Pause 0.1
279 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 11.09
280 TestStartStop/group/embed-certs/serial/DeployApp 0.1
281 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
284 TestStartStop/group/embed-certs/serial/SecondStart 7.07
285 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
286 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
287 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
288 TestStartStop/group/embed-certs/serial/Pause 0.1
290 TestStartStop/group/newest-cni/serial/FirstStart 11.27
291 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.1
292 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
295 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 6.96
296 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
297 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
298 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
299 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
304 TestStartStop/group/newest-cni/serial/SecondStart 5.25
307 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
308 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.16.0/json-events (35.66s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-819000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-819000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=qemu2 : exit status 40 (35.657077834s)

-- stdout --
	{"specversion":"1.0","id":"3d750825-b11b-4b12-8e04-8c621cc788e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-819000] minikube v1.30.1 on Darwin 13.3.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a7ba4a69-f665-4c28-8a6b-52f6226e84b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16543"}}
	{"specversion":"1.0","id":"e4f79a31-40dd-4f37-9323-23e6888abdb3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig"}}
	{"specversion":"1.0","id":"c1995b4e-ccc8-4f78-8d4c-f17ff21ea8da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"e77d1d44-c8d7-43ff-b766-85de269c4c7c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ed007353-3bb2-496f-b025-a52925e45c12","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube"}}
	{"specversion":"1.0","id":"3cf3be1b-e861-4b19-89c9-c2e1e77c83a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"b3d4a129-58b6-41c0-a06e-23ae7560870d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"75c27f40-e739-4979-bda7-ec0c7ed3b0a6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"2535989e-6fec-4d54-80c8-7f5d688aab50","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"e1c700c2-abf5-4f8e-9d81-60cdc74b4fa3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node download-only-819000 in cluster download-only-819000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"e1828de7-28fe-4348-8835-40ed4eee4724","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.16.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"a602f21b-7c76-4456-8564-ee11b09074ed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/16543-1012/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x107b9e1b8 0x107b9e1b8 0x107b9e1b8 0x107b9e1b8 0x107b9e1b8 0x107b9e1b8 0x107b9e1b8] Decompressors:map[bz2:0x14000056a28 gz:0x14000056a80 tar:0x14000056a30 tar.bz2:0x14000056a40 tar.gz:0x14000056a50 tar.xz:0x14000056a60 tar.zst:0x14000056a70 tbz2:0x14000056a40 tgz:0x140000
56a50 txz:0x14000056a60 tzst:0x14000056a70 xz:0x14000056a88 zip:0x14000056a90 zst:0x14000056aa0] Getters:map[file:0x140005a2c70 http:0x140009d6190 https:0x140009d61e0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"7f05ff83-5f22-47a5-8b66-d8b8dcf7ba52","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0520 08:03:43.095340    1442 out.go:296] Setting OutFile to fd 1 ...
	I0520 08:03:43.095468    1442 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:03:43.095471    1442 out.go:309] Setting ErrFile to fd 2...
	I0520 08:03:43.095473    1442 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:03:43.095576    1442 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16543-1012/.minikube/bin
	W0520 08:03:43.095700    1442 root.go:312] Error reading config file at /Users/jenkins/minikube-integration/16543-1012/.minikube/config/config.json: open /Users/jenkins/minikube-integration/16543-1012/.minikube/config/config.json: no such file or directory
	I0520 08:03:43.096871    1442 out.go:303] Setting JSON to true
	I0520 08:03:43.113839    1442 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":194,"bootTime":1684594829,"procs":411,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 08:03:43.113901    1442 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0520 08:03:43.119826    1442 out.go:97] [download-only-819000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0520 08:03:43.119967    1442 notify.go:220] Checking for updates...
	I0520 08:03:43.122768    1442 out.go:169] MINIKUBE_LOCATION=16543
	W0520 08:03:43.120062    1442 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball: no such file or directory
	I0520 08:03:43.131853    1442 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	I0520 08:03:43.135793    1442 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 08:03:43.138802    1442 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 08:03:43.141847    1442 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	W0520 08:03:43.147820    1442 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0520 08:03:43.148040    1442 driver.go:375] Setting default libvirt URI to qemu:///system
	I0520 08:03:43.152875    1442 out.go:97] Using the qemu2 driver based on user configuration
	I0520 08:03:43.152898    1442 start.go:295] selected driver: qemu2
	I0520 08:03:43.152913    1442 start.go:870] validating driver "qemu2" against <nil>
	I0520 08:03:43.152960    1442 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0520 08:03:43.156798    1442 out.go:169] Automatically selected the socket_vmnet network
	I0520 08:03:43.162273    1442 start_flags.go:382] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0520 08:03:43.162343    1442 start_flags.go:897] Wait components to verify : map[apiserver:true system_pods:true]
	I0520 08:03:43.162367    1442 cni.go:84] Creating CNI manager for ""
	I0520 08:03:43.162383    1442 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0520 08:03:43.162388    1442 start_flags.go:319] config:
	{Name:download-only-819000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-819000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0520 08:03:43.162551    1442 iso.go:125] acquiring lock: {Name:mkf05a14b47e9b67445252fbb4917f3bcb37f054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 08:03:43.166848    1442 out.go:97] Downloading VM boot image ...
	I0520 08:03:43.166864    1442 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/iso/arm64/minikube-v1.30.1-1684174510-16506-arm64.iso
	I0520 08:03:58.666525    1442 out.go:97] Starting control plane node download-only-819000 in cluster download-only-819000
	I0520 08:03:58.666550    1442 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0520 08:03:58.783564    1442 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0520 08:03:58.783651    1442 cache.go:57] Caching tarball of preloaded images
	I0520 08:03:58.783907    1442 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0520 08:03:58.790013    1442 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0520 08:03:58.790024    1442 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0520 08:03:59.004399    1442 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0520 08:04:17.309003    1442 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0520 08:04:17.309128    1442 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0520 08:04:17.955488    1442 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0520 08:04:17.955663    1442 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/download-only-819000/config.json ...
	I0520 08:04:17.955687    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/download-only-819000/config.json: {Name:mkac2b33f86d40d978ccd26f5df89d7fe7d6cc30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 08:04:17.955943    1442 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0520 08:04:17.956121    1442 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 -> /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/darwin/arm64/v1.16.0/kubectl
	I0520 08:04:18.678505    1442 out.go:169] 
	W0520 08:04:18.683611    1442 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/16543-1012/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x107b9e1b8 0x107b9e1b8 0x107b9e1b8 0x107b9e1b8 0x107b9e1b8 0x107b9e1b8 0x107b9e1b8] Decompressors:map[bz2:0x14000056a28 gz:0x14000056a80 tar:0x14000056a30 tar.bz2:0x14000056a40 tar.gz:0x14000056a50 tar.xz:0x14000056a60 tar.zst:0x14000056a70 tbz2:0x14000056a40 tgz:0x14000056a50 txz:0x14000056a60 tzst:0x14000056a70 xz:0x14000056a88 zip:0x14000056a90 zst:0x14000056aa0] Getters:map[file:0x140005a2c70 http:0x140009d6190 https:0x140009d61e0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0520 08:04:18.683645    1442 out_reason.go:110] 
	W0520 08:04:18.691481    1442 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 08:04:18.696487    1442 out.go:169] 

** /stderr **
aaa_download_only_test.go:71: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-819000" "--force" "--alsologtostderr" "--kubernetes-version=v1.16.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.16.0/json-events (35.66s)
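
Diagnosis: the getter error above 404s on both the v1.16.0 darwin/arm64 kubectl binary and its .sha1 checksum. Kubernetes v1.16.0 (September 2019) predates Apple Silicon, so no darwin/arm64 binaries were ever published for that release, and minikube exits with code 40 when it cannot cache kubectl. The following Go sketch (illustrative only, not minikube code; the URL is copied verbatim from the log) reproduces the failing request:

	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		// Checksum URL taken verbatim from the getter error above.
		url := "https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1"
		resp, err := http.Head(url)
		if err != nil {
			fmt.Println("request failed:", err)
			return
		}
		defer resp.Body.Close()
		// Prints "404 Not Found" -- the same "bad response code: 404"
		// that the downloader reports in the log above.
		fmt.Println(resp.Status)
	}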

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:160: expected the file for binary exist at "/Users/jenkins/minikube-integration/16543-1012/.minikube/cache/darwin/arm64/v1.16.0/kubectl" but got error stat /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/darwin/arm64/v1.16.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.16.0/kubectl (0.00s)
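
This is a knock-on failure: the json-events test above exited before anything was written to the kubectl cache path, so the file-existence assertion here cannot pass. The check at aaa_download_only_test.go:160 amounts to a stat call, roughly like this sketch (not the actual minikube test source; the path is copied from the error message):

	package download

	import (
		"os"
		"testing"
	)

	// Rough sketch of the failing assertion: the cached kubectl binary must exist on disk.
	func TestKubectlCached(t *testing.T) {
		binPath := "/Users/jenkins/minikube-integration/16543-1012/.minikube/cache/darwin/arm64/v1.16.0/kubectl"
		if _, err := os.Stat(binPath); err != nil {
			t.Errorf("expected the file for binary exist at %q but got error %v", binPath, err)
		}
	}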

TestOffline (10.01s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-820000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-820000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.872031583s)

-- stdout --
	* [offline-docker-820000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16543
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node offline-docker-820000 in cluster offline-docker-820000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-820000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 08:18:31.470117    3207 out.go:296] Setting OutFile to fd 1 ...
	I0520 08:18:31.470244    3207 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:18:31.470247    3207 out.go:309] Setting ErrFile to fd 2...
	I0520 08:18:31.470249    3207 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:18:31.470318    3207 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16543-1012/.minikube/bin
	I0520 08:18:31.471363    3207 out.go:303] Setting JSON to false
	I0520 08:18:31.488608    3207 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1082,"bootTime":1684594829,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 08:18:31.488693    3207 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0520 08:18:31.493760    3207 out.go:177] * [offline-docker-820000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0520 08:18:31.501691    3207 notify.go:220] Checking for updates...
	I0520 08:18:31.505462    3207 out.go:177]   - MINIKUBE_LOCATION=16543
	I0520 08:18:31.508708    3207 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	I0520 08:18:31.511687    3207 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 08:18:31.514757    3207 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 08:18:31.517683    3207 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	I0520 08:18:31.520776    3207 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 08:18:31.524033    3207 config.go:182] Loaded profile config "multinode-046000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0520 08:18:31.524064    3207 driver.go:375] Setting default libvirt URI to qemu:///system
	I0520 08:18:31.527659    3207 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 08:18:31.534649    3207 start.go:295] selected driver: qemu2
	I0520 08:18:31.534658    3207 start.go:870] validating driver "qemu2" against <nil>
	I0520 08:18:31.534665    3207 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 08:18:31.536447    3207 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0520 08:18:31.539583    3207 out.go:177] * Automatically selected the socket_vmnet network
	I0520 08:18:31.542775    3207 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 08:18:31.542792    3207 cni.go:84] Creating CNI manager for ""
	I0520 08:18:31.542801    3207 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 08:18:31.542805    3207 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 08:18:31.542813    3207 start_flags.go:319] config:
	{Name:offline-docker-820000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:offline-docker-820000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0520 08:18:31.542895    3207 iso.go:125] acquiring lock: {Name:mkf05a14b47e9b67445252fbb4917f3bcb37f054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 08:18:31.548646    3207 out.go:177] * Starting control plane node offline-docker-820000 in cluster offline-docker-820000
	I0520 08:18:31.552675    3207 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0520 08:18:31.552705    3207 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0520 08:18:31.552713    3207 cache.go:57] Caching tarball of preloaded images
	I0520 08:18:31.552789    3207 preload.go:174] Found /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 08:18:31.552794    3207 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0520 08:18:31.552860    3207 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/offline-docker-820000/config.json ...
	I0520 08:18:31.552871    3207 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/offline-docker-820000/config.json: {Name:mk208e6468b2f2ba25166a25a87763b965d48467 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 08:18:31.553049    3207 cache.go:195] Successfully downloaded all kic artifacts
	I0520 08:18:31.553059    3207 start.go:364] acquiring machines lock for offline-docker-820000: {Name:mk270c257f49033ffda3b70fea9b877c78314aa8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 08:18:31.553084    3207 start.go:368] acquired machines lock for "offline-docker-820000" in 20.75µs
	I0520 08:18:31.553097    3207 start.go:93] Provisioning new machine with config: &{Name:offline-docker-820000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:offline-docker-820000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 08:18:31.553121    3207 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 08:18:31.571678    3207 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0520 08:18:31.586101    3207 start.go:159] libmachine.API.Create for "offline-docker-820000" (driver="qemu2")
	I0520 08:18:31.586127    3207 client.go:168] LocalClient.Create starting
	I0520 08:18:31.586190    3207 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca.pem
	I0520 08:18:31.586213    3207 main.go:141] libmachine: Decoding PEM data...
	I0520 08:18:31.586224    3207 main.go:141] libmachine: Parsing certificate...
	I0520 08:18:31.586276    3207 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/cert.pem
	I0520 08:18:31.586291    3207 main.go:141] libmachine: Decoding PEM data...
	I0520 08:18:31.586301    3207 main.go:141] libmachine: Parsing certificate...
	I0520 08:18:31.586618    3207 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16543-1012/.minikube/cache/iso/arm64/minikube-v1.30.1-1684174510-16506-arm64.iso...
	I0520 08:18:31.698699    3207 main.go:141] libmachine: Creating SSH key...
	I0520 08:18:31.853254    3207 main.go:141] libmachine: Creating Disk image...
	I0520 08:18:31.853270    3207 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 08:18:31.853496    3207 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/offline-docker-820000/disk.qcow2.raw /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/offline-docker-820000/disk.qcow2
	I0520 08:18:31.867847    3207 main.go:141] libmachine: STDOUT: 
	I0520 08:18:31.867872    3207 main.go:141] libmachine: STDERR: 
	I0520 08:18:31.867942    3207 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/offline-docker-820000/disk.qcow2 +20000M
	I0520 08:18:31.875762    3207 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 08:18:31.875782    3207 main.go:141] libmachine: STDERR: 
	I0520 08:18:31.875815    3207 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/offline-docker-820000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/offline-docker-820000/disk.qcow2
	I0520 08:18:31.875824    3207 main.go:141] libmachine: Starting QEMU VM...
	I0520 08:18:31.875864    3207 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/offline-docker-820000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/offline-docker-820000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/offline-docker-820000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:29:bf:ad:b4:50 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/offline-docker-820000/disk.qcow2
	I0520 08:18:31.877593    3207 main.go:141] libmachine: STDOUT: 
	I0520 08:18:31.877606    3207 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 08:18:31.877627    3207 client.go:171] LocalClient.Create took 291.493458ms
	I0520 08:18:33.879413    3207 start.go:128] duration metric: createHost completed in 2.326286084s
	I0520 08:18:33.879437    3207 start.go:83] releasing machines lock for "offline-docker-820000", held for 2.326353292s
	W0520 08:18:33.879474    3207 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:18:33.890271    3207 out.go:177] * Deleting "offline-docker-820000" in qemu2 ...
	W0520 08:18:33.901891    3207 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:18:33.901902    3207 start.go:702] Will try again in 5 seconds ...
	I0520 08:18:38.904124    3207 start.go:364] acquiring machines lock for offline-docker-820000: {Name:mk270c257f49033ffda3b70fea9b877c78314aa8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 08:18:38.904597    3207 start.go:368] acquired machines lock for "offline-docker-820000" in 376.417µs
	I0520 08:18:38.904731    3207 start.go:93] Provisioning new machine with config: &{Name:offline-docker-820000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:offline-docker-820000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 08:18:38.905076    3207 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 08:18:38.914796    3207 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0520 08:18:38.961906    3207 start.go:159] libmachine.API.Create for "offline-docker-820000" (driver="qemu2")
	I0520 08:18:38.961952    3207 client.go:168] LocalClient.Create starting
	I0520 08:18:38.962143    3207 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca.pem
	I0520 08:18:38.962209    3207 main.go:141] libmachine: Decoding PEM data...
	I0520 08:18:38.962230    3207 main.go:141] libmachine: Parsing certificate...
	I0520 08:18:38.962316    3207 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/cert.pem
	I0520 08:18:38.962351    3207 main.go:141] libmachine: Decoding PEM data...
	I0520 08:18:38.962366    3207 main.go:141] libmachine: Parsing certificate...
	I0520 08:18:38.962954    3207 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16543-1012/.minikube/cache/iso/arm64/minikube-v1.30.1-1684174510-16506-arm64.iso...
	I0520 08:18:39.096409    3207 main.go:141] libmachine: Creating SSH key...
	I0520 08:18:39.260547    3207 main.go:141] libmachine: Creating Disk image...
	I0520 08:18:39.260560    3207 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 08:18:39.260740    3207 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/offline-docker-820000/disk.qcow2.raw /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/offline-docker-820000/disk.qcow2
	I0520 08:18:39.269594    3207 main.go:141] libmachine: STDOUT: 
	I0520 08:18:39.269612    3207 main.go:141] libmachine: STDERR: 
	I0520 08:18:39.269656    3207 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/offline-docker-820000/disk.qcow2 +20000M
	I0520 08:18:39.276788    3207 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 08:18:39.276799    3207 main.go:141] libmachine: STDERR: 
	I0520 08:18:39.276814    3207 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/offline-docker-820000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/offline-docker-820000/disk.qcow2
	I0520 08:18:39.276821    3207 main.go:141] libmachine: Starting QEMU VM...
	I0520 08:18:39.276865    3207 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/offline-docker-820000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/offline-docker-820000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/offline-docker-820000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:a2:7d:32:8c:2a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/offline-docker-820000/disk.qcow2
	I0520 08:18:39.278359    3207 main.go:141] libmachine: STDOUT: 
	I0520 08:18:39.278370    3207 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 08:18:39.278381    3207 client.go:171] LocalClient.Create took 316.424084ms
	I0520 08:18:41.280460    3207 start.go:128] duration metric: createHost completed in 2.37537025s
	I0520 08:18:41.280484    3207 start.go:83] releasing machines lock for "offline-docker-820000", held for 2.375870958s
	W0520 08:18:41.280700    3207 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-820000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-820000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:18:41.289849    3207 out.go:177] 
	W0520 08:18:41.293775    3207 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 08:18:41.293782    3207 out.go:239] * 
	* 
	W0520 08:18:41.294566    3207 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 08:18:41.303747    3207 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-820000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:522: *** TestOffline FAILED at 2023-05-20 08:18:41.31448 -0700 PDT m=+898.303305293
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-820000 -n offline-docker-820000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-820000 -n offline-docker-820000: exit status 7 (33.932834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-820000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-820000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-820000
--- FAIL: TestOffline (10.01s)
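
Diagnosis: despite its name, this is not an offline-mode failure. Both VM creation attempts die as soon as socket_vmnet_client tries to reach the daemon socket ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"), which indicates the socket_vmnet daemon was not running on the agent; the many other ~10 s qemu2 Start failures in the table above are consistent with the same cause. A minimal Go probe of the socket, assuming the default path shown in the log (diagnostic sketch, not minikube code):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Socket path taken from the failing qemu-system-aarch64 invocation above.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// On this run the daemon was down, so this prints the same
			// "connection refused" error seen in the test output.
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the probe fails, restarting socket_vmnet on the host (installed under /opt/socket_vmnet per the log) before re-running the suite should clear this class of failure.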

TestAddons/Setup (44.81s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-862000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:88: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-862000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 80 (44.804565417s)

-- stdout --
	* [addons-862000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16543
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node addons-862000 in cluster addons-862000
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.27.2 on Docker 20.10.23 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	
	  - Using image docker.io/registry:2.8.1
	  - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	  - Using image gcr.io/google_containers/kube-registry-proxy:0.4
	* Verifying ingress addon...
	* Verifying csi-hostpath-driver addon...
	

-- /stdout --
** stderr ** 
	I0520 08:04:36.692014    1530 out.go:296] Setting OutFile to fd 1 ...
	I0520 08:04:36.692153    1530 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:04:36.692156    1530 out.go:309] Setting ErrFile to fd 2...
	I0520 08:04:36.692159    1530 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:04:36.692226    1530 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16543-1012/.minikube/bin
	I0520 08:04:36.693259    1530 out.go:303] Setting JSON to false
	I0520 08:04:36.708313    1530 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":247,"bootTime":1684594829,"procs":398,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 08:04:36.708381    1530 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0520 08:04:36.713250    1530 out.go:177] * [addons-862000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0520 08:04:36.724282    1530 out.go:177]   - MINIKUBE_LOCATION=16543
	I0520 08:04:36.720270    1530 notify.go:220] Checking for updates...
	I0520 08:04:36.731263    1530 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	I0520 08:04:36.735287    1530 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 08:04:36.736390    1530 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 08:04:36.744262    1530 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	I0520 08:04:36.751206    1530 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 08:04:36.755382    1530 driver.go:375] Setting default libvirt URI to qemu:///system
	I0520 08:04:36.759171    1530 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 08:04:36.766208    1530 start.go:295] selected driver: qemu2
	I0520 08:04:36.766214    1530 start.go:870] validating driver "qemu2" against <nil>
	I0520 08:04:36.766220    1530 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 08:04:36.768244    1530 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0520 08:04:36.771248    1530 out.go:177] * Automatically selected the socket_vmnet network
	I0520 08:04:36.774360    1530 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 08:04:36.774379    1530 cni.go:84] Creating CNI manager for ""
	I0520 08:04:36.774385    1530 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 08:04:36.774389    1530 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 08:04:36.774396    1530 start_flags.go:319] config:
	{Name:addons-862000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:addons-862000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0520 08:04:36.774470    1530 iso.go:125] acquiring lock: {Name:mkf05a14b47e9b67445252fbb4917f3bcb37f054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 08:04:36.783213    1530 out.go:177] * Starting control plane node addons-862000 in cluster addons-862000
	I0520 08:04:36.787242    1530 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0520 08:04:36.787276    1530 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0520 08:04:36.787286    1530 cache.go:57] Caching tarball of preloaded images
	I0520 08:04:36.787338    1530 preload.go:174] Found /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 08:04:36.787344    1530 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0520 08:04:36.787564    1530 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/addons-862000/config.json ...
	I0520 08:04:36.787580    1530 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/addons-862000/config.json: {Name:mkcbd7ff58d615780328dec3987ea9d1c56c3333 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 08:04:36.787807    1530 cache.go:195] Successfully downloaded all kic artifacts
	I0520 08:04:36.787819    1530 start.go:364] acquiring machines lock for addons-862000: {Name:mk270c257f49033ffda3b70fea9b877c78314aa8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 08:04:36.787901    1530 start.go:368] acquired machines lock for "addons-862000" in 76.334µs
	I0520 08:04:36.787918    1530 start.go:93] Provisioning new machine with config: &{Name:addons-862000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:addons-862000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 08:04:36.787949    1530 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 08:04:36.796266    1530 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0520 08:04:37.162435    1530 start.go:159] libmachine.API.Create for "addons-862000" (driver="qemu2")
	I0520 08:04:37.162463    1530 client.go:168] LocalClient.Create starting
	I0520 08:04:37.162608    1530 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca.pem
	I0520 08:04:37.282973    1530 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/cert.pem
	I0520 08:04:37.658425    1530 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16543-1012/.minikube/cache/iso/arm64/minikube-v1.30.1-1684174510-16506-arm64.iso...
	I0520 08:04:37.988944    1530 main.go:141] libmachine: Creating SSH key...
	I0520 08:04:38.050262    1530 main.go:141] libmachine: Creating Disk image...
	I0520 08:04:38.050271    1530 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 08:04:38.050502    1530 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/addons-862000/disk.qcow2.raw /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/addons-862000/disk.qcow2
	I0520 08:04:38.132087    1530 main.go:141] libmachine: STDOUT: 
	I0520 08:04:38.132118    1530 main.go:141] libmachine: STDERR: 
	I0520 08:04:38.132178    1530 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/addons-862000/disk.qcow2 +20000M
	I0520 08:04:38.139601    1530 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 08:04:38.139614    1530 main.go:141] libmachine: STDERR: 
	I0520 08:04:38.139632    1530 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/addons-862000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/addons-862000/disk.qcow2
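
The disk provisioning above is two qemu-img invocations: a raw-to-qcow2 conversion followed by a +20000M resize. A minimal Go sketch of the same pair of calls (the helper name, paths, and size are illustrative, not minikube's actual code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// createDiskImage mirrors the two qemu-img calls in the log: convert the
	// raw seed image to qcow2, then grow it by extraMB megabytes.
	func createDiskImage(raw, qcow2 string, extraMB int) error {
		if out, err := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2).CombinedOutput(); err != nil {
			return fmt.Errorf("convert: %v: %s", err, out)
		}
		if out, err := exec.Command("qemu-img", "resize", qcow2, fmt.Sprintf("+%dM", extraMB)).CombinedOutput(); err != nil {
			return fmt.Errorf("resize: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		if err := createDiskImage("disk.qcow2.raw", "disk.qcow2", 20000); err != nil {
			fmt.Println(err)
		}
	}
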
	I0520 08:04:38.139637    1530 main.go:141] libmachine: Starting QEMU VM...
	I0520 08:04:38.139685    1530 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/addons-862000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/addons-862000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/addons-862000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:b1:60:c3:d2:35 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/addons-862000/disk.qcow2
	I0520 08:04:38.278993    1530 main.go:141] libmachine: STDOUT: 
	I0520 08:04:38.279014    1530 main.go:141] libmachine: STDERR: 
	I0520 08:04:38.279019    1530 main.go:141] libmachine: Attempt 0
	I0520 08:04:38.279034    1530 main.go:141] libmachine: Searching for 36:b1:60:c3:d2:35 in /var/db/dhcpd_leases ...
	I0520 08:04:40.280234    1530 main.go:141] libmachine: Attempt 1
	I0520 08:04:40.280378    1530 main.go:141] libmachine: Searching for 36:b1:60:c3:d2:35 in /var/db/dhcpd_leases ...
	I0520 08:04:42.281519    1530 main.go:141] libmachine: Attempt 2
	I0520 08:04:42.281545    1530 main.go:141] libmachine: Searching for 36:b1:60:c3:d2:35 in /var/db/dhcpd_leases ...
	I0520 08:04:44.282604    1530 main.go:141] libmachine: Attempt 3
	I0520 08:04:44.282616    1530 main.go:141] libmachine: Searching for 36:b1:60:c3:d2:35 in /var/db/dhcpd_leases ...
	I0520 08:04:46.283681    1530 main.go:141] libmachine: Attempt 4
	I0520 08:04:46.283711    1530 main.go:141] libmachine: Searching for 36:b1:60:c3:d2:35 in /var/db/dhcpd_leases ...
	I0520 08:04:48.284842    1530 main.go:141] libmachine: Attempt 5
	I0520 08:04:48.284889    1530 main.go:141] libmachine: Searching for 36:b1:60:c3:d2:35 in /var/db/dhcpd_leases ...
	I0520 08:04:50.286131    1530 main.go:141] libmachine: Attempt 6
	I0520 08:04:50.286204    1530 main.go:141] libmachine: Searching for 36:b1:60:c3:d2:35 in /var/db/dhcpd_leases ...
	I0520 08:04:50.286680    1530 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0520 08:04:50.286780    1530 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:36:b1:60:c3:d2:35 ID:1,36:b1:60:c3:d2:35 Lease:0x646a3311}
	I0520 08:04:50.286805    1530 main.go:141] libmachine: Found match: 36:b1:60:c3:d2:35
	I0520 08:04:50.286886    1530 main.go:141] libmachine: IP: 192.168.105.2
	I0520 08:04:50.287056    1530 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
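
The retry loop above polls macOS's /var/db/dhcpd_leases until the VM's MAC address appears with an assigned IP. A sketch of that lookup, assuming the key=value entry layout implied by the matched lease line (ip_address preceding hw_address within each block); this is not minikube's own parser:

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// findLeaseIP scans a /var/db/dhcpd_leases-style file for the entry whose
	// hw_address ends with mac and returns the ip_address seen just before it.
	func findLeaseIP(path, mac string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", err
		}
		defer f.Close()

		var ip string
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case strings.HasPrefix(line, "ip_address="):
				ip = strings.TrimPrefix(line, "ip_address=")
			case strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, mac):
				// hw_address lines look like "hw_address=1,36:b1:60:c3:d2:35".
				return ip, nil
			}
		}
		return "", fmt.Errorf("no lease for %s", mac)
	}

	func main() {
		ip, err := findLeaseIP("/var/db/dhcpd_leases", "36:b1:60:c3:d2:35")
		fmt.Println(ip, err)
	}
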
	I0520 08:04:52.307461    1530 machine.go:88] provisioning docker machine ...
	I0520 08:04:52.307562    1530 buildroot.go:166] provisioning hostname "addons-862000"
	I0520 08:04:52.308415    1530 main.go:141] libmachine: Using SSH client type: native
	I0520 08:04:52.309462    1530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1047f46d0] 0x1047f7130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0520 08:04:52.309480    1530 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-862000 && echo "addons-862000" | sudo tee /etc/hostname
	I0520 08:04:52.405463    1530 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-862000
	
	I0520 08:04:52.405571    1530 main.go:141] libmachine: Using SSH client type: native
	I0520 08:04:52.406079    1530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1047f46d0] 0x1047f7130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0520 08:04:52.406093    1530 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-862000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-862000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-862000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 08:04:52.484459    1530 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 08:04:52.484479    1530 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/16543-1012/.minikube CaCertPath:/Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16543-1012/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16543-1012/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16543-1012/.minikube}
	I0520 08:04:52.484504    1530 buildroot.go:174] setting up certificates
	I0520 08:04:52.484534    1530 provision.go:83] configureAuth start
	I0520 08:04:52.484540    1530 provision.go:138] copyHostCerts
	I0520 08:04:52.484707    1530 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16543-1012/.minikube/ca.pem (1082 bytes)
	I0520 08:04:52.485820    1530 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16543-1012/.minikube/cert.pem (1123 bytes)
	I0520 08:04:52.486159    1530 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16543-1012/.minikube/key.pem (1679 bytes)
	I0520 08:04:52.486430    1530 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca-key.pem org=jenkins.addons-862000 san=[192.168.105.2 192.168.105.2 localhost 127.0.0.1 minikube addons-862000]
	I0520 08:04:52.578073    1530 provision.go:172] copyRemoteCerts
	I0520 08:04:52.578149    1530 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 08:04:52.578166    1530 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/addons-862000/id_rsa Username:docker}
	I0520 08:04:52.613828    1530 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0520 08:04:52.621127    1530 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0520 08:04:52.628198    1530 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0520 08:04:52.635304    1530 provision.go:86] duration metric: configureAuth took 150.764167ms
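
configureAuth produces a server certificate whose SANs cover the VM IP, localhost, and the machine names listed above. As a rough illustration of the same idea with Go's standard library (key sizes, lifetimes, and subject names here are assumptions, not minikube's values):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Self-signed CA, standing in for the minikubeCA the log references.
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(3, 0, 0), // lifetime is an assumption
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		caCert, err := x509.ParseCertificate(caDER)
		if err != nil {
			panic(err)
		}

		// Server cert carrying the SANs from the log's san=[...] list.
		srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "addons-862000"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			IPAddresses:  []net.IP{net.ParseIP("192.168.105.2"), net.ParseIP("127.0.0.1")},
			DNSNames:     []string{"localhost", "minikube", "addons-862000"},
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}
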
	I0520 08:04:52.635312    1530 buildroot.go:189] setting minikube options for container-runtime
	I0520 08:04:52.635673    1530 config.go:182] Loaded profile config "addons-862000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0520 08:04:52.635708    1530 main.go:141] libmachine: Using SSH client type: native
	I0520 08:04:52.635918    1530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1047f46d0] 0x1047f7130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0520 08:04:52.635923    1530 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0520 08:04:52.700111    1530 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0520 08:04:52.700119    1530 buildroot.go:70] root file system type: tmpfs
	I0520 08:04:52.700173    1530 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0520 08:04:52.700221    1530 main.go:141] libmachine: Using SSH client type: native
	I0520 08:04:52.700462    1530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1047f46d0] 0x1047f7130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0520 08:04:52.700503    1530 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0520 08:04:52.771521    1530 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0520 08:04:52.771582    1530 main.go:141] libmachine: Using SSH client type: native
	I0520 08:04:52.771848    1530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1047f46d0] 0x1047f7130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0520 08:04:52.771861    1530 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0520 08:04:53.104177    1530 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0520 08:04:53.104191    1530 machine.go:91] provisioned docker machine in 796.700458ms
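
The unit-file swap a few lines up relies on a compare-before-move idiom: docker.service.new only replaces the live unit (and triggers daemon-reload plus a restart) when the contents actually differ. A hedged Go sketch of that write-if-changed pattern, not the ssh_runner code itself:

	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	// writeIfChanged mimics the "diff || { mv; restart; }" idiom: the caller
	// only needs to daemon-reload/restart docker when changed is true.
	func writeIfChanged(path string, content []byte) (changed bool, err error) {
		old, err := os.ReadFile(path)
		if err == nil && bytes.Equal(old, content) {
			return false, nil // identical: leave the running service alone
		}
		tmp := path + ".new"
		if err := os.WriteFile(tmp, content, 0o644); err != nil {
			return false, err
		}
		return true, os.Rename(tmp, path)
	}

	func main() {
		changed, err := writeIfChanged("docker.service", []byte("[Unit]\n"))
		fmt.Println(changed, err)
	}
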
	I0520 08:04:53.104197    1530 client.go:171] LocalClient.Create took 15.941758s
	I0520 08:04:53.104215    1530 start.go:167] duration metric: libmachine.API.Create for "addons-862000" took 15.941811959s
	I0520 08:04:53.104219    1530 start.go:300] post-start starting for "addons-862000" (driver="qemu2")
	I0520 08:04:53.104222    1530 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 08:04:53.104287    1530 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 08:04:53.104298    1530 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/addons-862000/id_rsa Username:docker}
	I0520 08:04:53.140639    1530 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 08:04:53.142026    1530 info.go:137] Remote host: Buildroot 2021.02.12
	I0520 08:04:53.142039    1530 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16543-1012/.minikube/addons for local assets ...
	I0520 08:04:53.142116    1530 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16543-1012/.minikube/files for local assets ...
	I0520 08:04:53.142143    1530 start.go:303] post-start completed in 37.921208ms
	I0520 08:04:53.142530    1530 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/addons-862000/config.json ...
	I0520 08:04:53.142692    1530 start.go:128] duration metric: createHost completed in 16.354766875s
	I0520 08:04:53.142745    1530 main.go:141] libmachine: Using SSH client type: native
	I0520 08:04:53.142985    1530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1047f46d0] 0x1047f7130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0520 08:04:53.142997    1530 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0520 08:04:53.207330    1530 main.go:141] libmachine: SSH cmd err, output: <nil>: 1684595093.449783169
	
	I0520 08:04:53.207337    1530 fix.go:207] guest clock: 1684595093.449783169
	I0520 08:04:53.207340    1530 fix.go:220] Guest: 2023-05-20 08:04:53.449783169 -0700 PDT Remote: 2023-05-20 08:04:53.142696 -0700 PDT m=+16.468728126 (delta=307.087169ms)
	I0520 08:04:53.207350    1530 fix.go:191] guest clock delta is within tolerance: 307.087169ms
	I0520 08:04:53.207353    1530 start.go:83] releasing machines lock for "addons-862000", held for 16.41947525s
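
The guest-clock check runs `date +%s.%N` inside the VM and compares it with the host's wall clock, accepting small drift. A sketch of that comparison, with the sample values copied from the log (the 2s tolerance is an assumption for illustration):

	package main

	import (
		"fmt"
		"strconv"
		"time"
	)

	// clockDelta turns the guest's `date +%s.%N` output into a time and
	// reports its drift from the host clock, mirroring the fix.go lines above.
	func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(guestOut, 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return guest.Sub(host), nil
	}

	func main() {
		host := time.Unix(1684595093, 142696000) // the log's Remote timestamp
		d, _ := clockDelta("1684595093.449783169", host)
		fmt.Println(d, d < 2*time.Second) // ~307ms, within tolerance
	}
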
	I0520 08:04:53.207621    1530 ssh_runner.go:195] Run: cat /version.json
	I0520 08:04:53.207636    1530 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/addons-862000/id_rsa Username:docker}
	I0520 08:04:53.207639    1530 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 08:04:53.207673    1530 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/addons-862000/id_rsa Username:docker}
	I0520 08:04:53.285271    1530 ssh_runner.go:195] Run: systemctl --version
	I0520 08:04:53.287345    1530 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 08:04:53.289302    1530 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 08:04:53.289335    1530 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 08:04:53.294577    1530 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 08:04:53.294598    1530 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0520 08:04:53.294670    1530 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0520 08:04:53.306341    1530 docker.go:633] Got preloaded images: 
	I0520 08:04:53.306350    1530 docker.go:639] registry.k8s.io/kube-apiserver:v1.27.2 wasn't preloaded
	I0520 08:04:53.306404    1530 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0520 08:04:53.309834    1530 ssh_runner.go:195] Run: which lz4
	I0520 08:04:53.311263    1530 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0520 08:04:53.312564    1530 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 08:04:53.312578    1530 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (343635669 bytes)
	I0520 08:04:54.621671    1530 docker.go:597] Took 1.310454 seconds to copy over tarball
	I0520 08:04:54.621728    1530 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0520 08:04:55.748735    1530 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.126987917s)
	I0520 08:04:55.748757    1530 ssh_runner.go:146] rm: /preloaded.tar.lz4
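
The preload step copies the ~343 MB tarball over SSH and unpacks it with an lz4-aware tar, timing both halves. A minimal sketch of the extraction half, reusing the command line the log shows (paths are from the log, for illustration only):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// Unpacks the preload tarball under /var and prints a duration metric,
	// like the ssh_runner "Completed" line above.
	func main() {
		start := time.Now()
		cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("extract failed: %v: %s\n", err, out)
			return
		}
		fmt.Printf("extracted preload in %s\n", time.Since(start))
	}
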
	I0520 08:04:55.763774    1530 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0520 08:04:55.767193    1530 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0520 08:04:55.772816    1530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 08:04:55.838184    1530 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 08:04:57.736685    1530 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.898482708s)
	I0520 08:04:57.736712    1530 start.go:481] detecting cgroup driver to use...
	I0520 08:04:57.736819    1530 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 08:04:57.742793    1530 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0520 08:04:57.747058    1530 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0520 08:04:57.750305    1530 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0520 08:04:57.750336    1530 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0520 08:04:57.753452    1530 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 08:04:57.756200    1530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0520 08:04:57.759225    1530 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 08:04:57.762593    1530 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 08:04:57.766224    1530 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0520 08:04:57.769387    1530 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 08:04:57.772105    1530 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 08:04:57.775126    1530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 08:04:57.833336    1530 ssh_runner.go:195] Run: sudo systemctl restart containerd
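
The run of sed commands above rewrites /etc/containerd/config.toml so containerd uses cgroupfs and the runc v2 runtime. The SystemdCgroup edit, done in Go instead of sed (a sketch, not minikube's code):

	package main

	import (
		"fmt"
		"regexp"
	)

	// useCgroupfs performs the same edit as the SystemdCgroup sed line above,
	// keeping the original indentation via the captured group.
	func useCgroupfs(configTOML string) string {
		re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
		return re.ReplaceAllString(configTOML, "${1}SystemdCgroup = false")
	}

	func main() {
		fmt.Println(useCgroupfs("  SystemdCgroup = true"))
	}
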
	I0520 08:04:57.842674    1530 start.go:481] detecting cgroup driver to use...
	I0520 08:04:57.842737    1530 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0520 08:04:57.849274    1530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 08:04:57.854131    1530 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 08:04:57.860020    1530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 08:04:57.864576    1530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 08:04:57.868869    1530 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0520 08:04:57.906638    1530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 08:04:57.911972    1530 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 08:04:57.917186    1530 ssh_runner.go:195] Run: which cri-dockerd
	I0520 08:04:57.918418    1530 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0520 08:04:57.921032    1530 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0520 08:04:57.925403    1530 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0520 08:04:57.989186    1530 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0520 08:04:58.048854    1530 docker.go:532] configuring docker to use "cgroupfs" as cgroup driver...
	I0520 08:04:58.048867    1530 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0520 08:04:58.054486    1530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 08:04:58.117123    1530 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 08:04:59.273494    1530 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.156353667s)
	I0520 08:04:59.273560    1530 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0520 08:04:59.333742    1530 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0520 08:04:59.388895    1530 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0520 08:04:59.456178    1530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 08:04:59.517680    1530 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0520 08:04:59.524376    1530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 08:04:59.590163    1530 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0520 08:04:59.614393    1530 start.go:528] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0520 08:04:59.614485    1530 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0520 08:04:59.617434    1530 start.go:549] Will wait 60s for crictl version
	I0520 08:04:59.617464    1530 ssh_runner.go:195] Run: which crictl
	I0520 08:04:59.618901    1530 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 08:04:59.636700    1530 start.go:565] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0520 08:04:59.636779    1530 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0520 08:04:59.645835    1530 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0520 08:04:59.660513    1530 out.go:204] * Preparing Kubernetes v1.27.2 on Docker 20.10.23 ...
	I0520 08:04:59.660690    1530 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0520 08:04:59.662257    1530 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 08:04:59.666344    1530 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0520 08:04:59.666385    1530 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0520 08:04:59.675123    1530 docker.go:633] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0520 08:04:59.675133    1530 docker.go:563] Images already preloaded, skipping extraction
	I0520 08:04:59.675190    1530 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0520 08:04:59.682655    1530 docker.go:633] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0520 08:04:59.682664    1530 cache_images.go:84] Images are preloaded, skipping loading
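
`docker images` is listed and its output compared against the expected image set before image loading is skipped. A sketch of that containment check:

	package main

	import "fmt"

	// preloaded reports whether every expected image shows up in the output
	// of `docker images --format {{.Repository}}:{{.Tag}}`, the decision
	// behind "Images are preloaded, skipping loading".
	func preloaded(got, want []string) bool {
		have := make(map[string]bool, len(got))
		for _, img := range got {
			have[img] = true
		}
		for _, img := range want {
			if !have[img] {
				return false
			}
		}
		return true
	}

	func main() {
		got := []string{
			"registry.k8s.io/kube-apiserver:v1.27.2",
			"registry.k8s.io/pause:3.9",
		}
		fmt.Println(preloaded(got, []string{"registry.k8s.io/pause:3.9"}))
	}
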
	I0520 08:04:59.682729    1530 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0520 08:04:59.692609    1530 cni.go:84] Creating CNI manager for ""
	I0520 08:04:59.692618    1530 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 08:04:59.692640    1530 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0520 08:04:59.692650    1530 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.27.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-862000 NodeName:addons-862000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 08:04:59.692725    1530 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-862000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
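
The kubeadm config above is rendered from the cluster settings; a trimmed text/template sketch of how such a file can be produced (the template text here is illustrative, not the one minikube ships):

	package main

	import (
		"os"
		"text/template"
	)

	// A stand-in for the kubeadm.yaml rendering; values mirror the log.
	const kubeadmTmpl = "apiVersion: kubeadm.k8s.io/v1beta3\n" +
		"kind: InitConfiguration\n" +
		"localAPIEndpoint:\n" +
		"  advertiseAddress: {{.NodeIP}}\n" +
		"  bindPort: {{.Port}}\n" +
		"nodeRegistration:\n" +
		"  criSocket: unix:///var/run/cri-dockerd.sock\n" +
		"  name: \"{{.NodeName}}\"\n"

	func main() {
		t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
		_ = t.Execute(os.Stdout, map[string]any{
			"NodeIP":   "192.168.105.2",
			"Port":     8443,
			"NodeName": "addons-862000",
		})
	}
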
	
	I0520 08:04:59.692761    1530 kubeadm.go:971] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-862000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.2 ClusterName:addons-862000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0520 08:04:59.692830    1530 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.2
	I0520 08:04:59.695719    1530 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 08:04:59.695757    1530 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 08:04:59.698780    1530 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0520 08:04:59.703975    1530 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 08:04:59.708896    1530 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0520 08:04:59.714197    1530 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0520 08:04:59.715597    1530 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
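
Both host entries (host.minikube.internal earlier and control-plane.minikube.internal here) are maintained with the same shell trick: strip any existing line for the name, then append a fresh mapping. The equivalent in Go, as a sketch operating on the file contents as a string:

	package main

	import (
		"fmt"
		"strings"
	)

	// upsertHost drops any existing line ending in "<TAB>name" and appends a
	// fresh "ip<TAB>name" mapping, like the grep -v / echo one-liner above.
	func upsertHost(hosts, ip, name string) string {
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return strings.Join(kept, "\n") + "\n"
	}

	func main() {
		fmt.Print(upsertHost("127.0.0.1\tlocalhost\n", "192.168.105.2", "control-plane.minikube.internal"))
	}
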
	I0520 08:04:59.719309    1530 certs.go:56] Setting up /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/addons-862000 for IP: 192.168.105.2
	I0520 08:04:59.719319    1530 certs.go:190] acquiring lock for shared ca certs: {Name:mk455286e32296d088c043e2094c607a8fa5e5c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 08:04:59.719475    1530 certs.go:204] generating minikubeCA CA: /Users/jenkins/minikube-integration/16543-1012/.minikube/ca.key
	I0520 08:04:59.919677    1530 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16543-1012/.minikube/ca.crt ...
	I0520 08:04:59.919686    1530 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16543-1012/.minikube/ca.crt: {Name:mkde11e542647abf155a6ecaf11a93b8ae50f134 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 08:04:59.920011    1530 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16543-1012/.minikube/ca.key ...
	I0520 08:04:59.920015    1530 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16543-1012/.minikube/ca.key: {Name:mke7ad8e5188ed19b1593564e03e68cdcec673ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 08:04:59.920136    1530 certs.go:204] generating proxyClientCA CA: /Users/jenkins/minikube-integration/16543-1012/.minikube/proxy-client-ca.key
	I0520 08:05:00.077432    1530 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16543-1012/.minikube/proxy-client-ca.crt ...
	I0520 08:05:00.077444    1530 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16543-1012/.minikube/proxy-client-ca.crt: {Name:mk0ec5866cff87615bf4188c616a6f3db04cff41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 08:05:00.077758    1530 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16543-1012/.minikube/proxy-client-ca.key ...
	I0520 08:05:00.077761    1530 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16543-1012/.minikube/proxy-client-ca.key: {Name:mkee8707f153ff3a4b1b2ef28edde187d552f48c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 08:05:00.077916    1530 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/addons-862000/client.key
	I0520 08:05:00.077942    1530 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/addons-862000/client.crt with IP's: []
	I0520 08:05:00.237745    1530 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/addons-862000/client.crt ...
	I0520 08:05:00.237758    1530 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/addons-862000/client.crt: {Name:mk5e3ac0f803b899c82bad996a1ac17533715dd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 08:05:00.237995    1530 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/addons-862000/client.key ...
	I0520 08:05:00.237998    1530 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/addons-862000/client.key: {Name:mk038bef07dae25d8a0196e7bbe1c46212e3f1c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 08:05:00.238118    1530 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/addons-862000/apiserver.key.96055969
	I0520 08:05:00.238133    1530 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/addons-862000/apiserver.crt.96055969 with IP's: [192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0520 08:05:00.271454    1530 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/addons-862000/apiserver.crt.96055969 ...
	I0520 08:05:00.271458    1530 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/addons-862000/apiserver.crt.96055969: {Name:mk58cbd6c1dc9f2995a2dc2a3284ba9c44bd31d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 08:05:00.271584    1530 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/addons-862000/apiserver.key.96055969 ...
	I0520 08:05:00.271587    1530 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/addons-862000/apiserver.key.96055969: {Name:mkcc7c4eeeab6e6cc2f8cf3b4feacdb0d9c5a87e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 08:05:00.271687    1530 certs.go:337] copying /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/addons-862000/apiserver.crt.96055969 -> /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/addons-862000/apiserver.crt
	I0520 08:05:00.271778    1530 certs.go:341] copying /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/addons-862000/apiserver.key.96055969 -> /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/addons-862000/apiserver.key
	I0520 08:05:00.271854    1530 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/addons-862000/proxy-client.key
	I0520 08:05:00.271864    1530 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/addons-862000/proxy-client.crt with IP's: []
	I0520 08:05:00.404503    1530 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/addons-862000/proxy-client.crt ...
	I0520 08:05:00.404506    1530 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/addons-862000/proxy-client.crt: {Name:mk99417ac74e72598f653edd528374db6ab44984 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 08:05:00.404634    1530 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/addons-862000/proxy-client.key ...
	I0520 08:05:00.404637    1530 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/addons-862000/proxy-client.key: {Name:mkb551147fa83b52e694954f6b7eb27ac94a34b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 08:05:00.404883    1530 certs.go:437] found cert: /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 08:05:00.405359    1530 certs.go:437] found cert: /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca.pem (1082 bytes)
	I0520 08:05:00.405395    1530 certs.go:437] found cert: /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/Users/jenkins/minikube-integration/16543-1012/.minikube/certs/cert.pem (1123 bytes)
	I0520 08:05:00.405635    1530 certs.go:437] found cert: /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/Users/jenkins/minikube-integration/16543-1012/.minikube/certs/key.pem (1679 bytes)
	I0520 08:05:00.406074    1530 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/addons-862000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0520 08:05:00.414201    1530 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/addons-862000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 08:05:00.421463    1530 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/addons-862000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 08:05:00.428273    1530 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/addons-862000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0520 08:05:00.435175    1530 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16543-1012/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 08:05:00.442709    1530 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16543-1012/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0520 08:05:00.450026    1530 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16543-1012/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 08:05:00.456740    1530 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16543-1012/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0520 08:05:00.463363    1530 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16543-1012/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 08:05:00.470493    1530 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 08:05:00.476389    1530 ssh_runner.go:195] Run: openssl version
	I0520 08:05:00.478221    1530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 08:05:00.481159    1530 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 08:05:00.482524    1530 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 May 20 15:04 /usr/share/ca-certificates/minikubeCA.pem
	I0520 08:05:00.482544    1530 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 08:05:00.484313    1530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 08:05:00.487417    1530 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0520 08:05:00.488693    1530 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0520 08:05:00.488732    1530 kubeadm.go:404] StartCluster: {Name:addons-862000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:addons-862000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0520 08:05:00.488795    1530 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0520 08:05:00.496274    1530 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0520 08:05:00.499373    1530 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 08:05:00.502414    1530 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 08:05:00.505128    1530 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 08:05:00.505143    1530 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 08:05:00.527165    1530 kubeadm.go:322] [init] Using Kubernetes version: v1.27.2
	I0520 08:05:00.527195    1530 kubeadm.go:322] [preflight] Running pre-flight checks
	I0520 08:05:00.590268    1530 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 08:05:00.590319    1530 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 08:05:00.590383    1530 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 08:05:00.650426    1530 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 08:05:00.654601    1530 out.go:204]   - Generating certificates and keys ...
	I0520 08:05:00.654671    1530 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0520 08:05:00.654709    1530 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0520 08:05:00.929707    1530 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0520 08:05:01.106245    1530 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0520 08:05:01.435687    1530 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0520 08:05:01.536194    1530 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0520 08:05:01.669100    1530 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0520 08:05:01.669183    1530 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-862000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0520 08:05:01.725093    1530 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0520 08:05:01.725172    1530 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-862000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0520 08:05:01.781342    1530 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0520 08:05:01.874443    1530 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0520 08:05:02.169828    1530 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0520 08:05:02.169865    1530 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 08:05:02.455021    1530 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 08:05:02.515614    1530 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 08:05:02.627148    1530 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 08:05:02.692534    1530 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 08:05:02.699203    1530 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 08:05:02.699310    1530 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 08:05:02.699394    1530 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0520 08:05:02.790449    1530 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 08:05:02.797604    1530 out.go:204]   - Booting up control plane ...
	I0520 08:05:02.797676    1530 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 08:05:02.797716    1530 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 08:05:02.797752    1530 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 08:05:02.797795    1530 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 08:05:02.797870    1530 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0520 08:05:06.800071    1530 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.006033 seconds
	I0520 08:05:06.800310    1530 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0520 08:05:06.815060    1530 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0520 08:05:07.330762    1530 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0520 08:05:07.330859    1530 kubeadm.go:322] [mark-control-plane] Marking the node addons-862000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0520 08:05:07.838342    1530 kubeadm.go:322] [bootstrap-token] Using token: gv038d.wrz44hy6c2yn94zw
	I0520 08:05:07.842016    1530 out.go:204]   - Configuring RBAC rules ...
	I0520 08:05:07.842082    1530 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0520 08:05:07.843113    1530 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0520 08:05:07.847521    1530 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0520 08:05:07.849703    1530 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0520 08:05:07.851195    1530 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0520 08:05:07.852602    1530 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0520 08:05:07.857403    1530 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0520 08:05:08.032972    1530 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0520 08:05:08.245768    1530 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0520 08:05:08.246121    1530 kubeadm.go:322] 
	I0520 08:05:08.246155    1530 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0520 08:05:08.246160    1530 kubeadm.go:322] 
	I0520 08:05:08.246204    1530 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0520 08:05:08.246210    1530 kubeadm.go:322] 
	I0520 08:05:08.246226    1530 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0520 08:05:08.246258    1530 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0520 08:05:08.246300    1530 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0520 08:05:08.246305    1530 kubeadm.go:322] 
	I0520 08:05:08.246334    1530 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0520 08:05:08.246341    1530 kubeadm.go:322] 
	I0520 08:05:08.246365    1530 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0520 08:05:08.246367    1530 kubeadm.go:322] 
	I0520 08:05:08.246392    1530 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0520 08:05:08.246426    1530 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0520 08:05:08.246468    1530 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0520 08:05:08.246474    1530 kubeadm.go:322] 
	I0520 08:05:08.246519    1530 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0520 08:05:08.246563    1530 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0520 08:05:08.246568    1530 kubeadm.go:322] 
	I0520 08:05:08.246605    1530 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token gv038d.wrz44hy6c2yn94zw \
	I0520 08:05:08.246660    1530 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c637a44edb20ebaef1a3cd8bd36bb27010137a6bac525a779e19218d8d4ae1e6 \
	I0520 08:05:08.246671    1530 kubeadm.go:322] 	--control-plane 
	I0520 08:05:08.246675    1530 kubeadm.go:322] 
	I0520 08:05:08.246711    1530 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0520 08:05:08.246715    1530 kubeadm.go:322] 
	I0520 08:05:08.246752    1530 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token gv038d.wrz44hy6c2yn94zw \
	I0520 08:05:08.246810    1530 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c637a44edb20ebaef1a3cd8bd36bb27010137a6bac525a779e19218d8d4ae1e6 
	I0520 08:05:08.246933    1530 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 08:05:08.247019    1530 kubeadm.go:322] W0520 15:05:00.832709    1292 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0520 08:05:08.247109    1530 kubeadm.go:322] W0520 15:05:03.034607    1292 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0520 08:05:08.247115    1530 cni.go:84] Creating CNI manager for ""
	I0520 08:05:08.247123    1530 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 08:05:08.254631    1530 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0520 08:05:08.258471    1530 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0520 08:05:08.261641    1530 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0520 08:05:08.266443    1530 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 08:05:08.266484    1530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 08:05:08.266525    1530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=24686ce6bbd657e092eb3c3fd6be64c1b7241033 minikube.k8s.io/name=addons-862000 minikube.k8s.io/updated_at=2023_05_20T08_05_08_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 08:05:08.329671    1530 ops.go:34] apiserver oom_adj: -16
	I0520 08:05:08.329674    1530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 08:05:08.864441    1530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 08:05:09.364467    1530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 08:05:09.864544    1530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 08:05:10.364553    1530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 08:05:10.864715    1530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 08:05:11.364682    1530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 08:05:11.864792    1530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 08:05:12.364705    1530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 08:05:12.864530    1530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 08:05:13.364759    1530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 08:05:13.863782    1530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 08:05:14.364517    1530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 08:05:14.864530    1530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 08:05:15.364493    1530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 08:05:15.864563    1530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 08:05:16.364498    1530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 08:05:16.864559    1530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 08:05:17.364513    1530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 08:05:17.863680    1530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 08:05:18.364500    1530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 08:05:18.864512    1530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 08:05:19.364514    1530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 08:05:19.864509    1530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 08:05:20.364527    1530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 08:05:20.864536    1530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 08:05:21.364441    1530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 08:05:21.397366    1530 kubeadm.go:1076] duration metric: took 13.130935417s to wait for elevateKubeSystemPrivileges.
	I0520 08:05:21.397380    1530 kubeadm.go:406] StartCluster complete in 20.908685542s
	I0520 08:05:21.397408    1530 settings.go:142] acquiring lock: {Name:mk59154b06c6365bdac4601706e783ce490a045a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 08:05:21.397590    1530 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/16543-1012/kubeconfig
	I0520 08:05:21.397759    1530 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16543-1012/kubeconfig: {Name:mkf6fa7fb711448995f7c2c1a6e60e631893d6a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 08:05:21.397974    1530 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0520 08:05:21.398027    1530 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0520 08:05:21.398071    1530 addons.go:66] Setting volumesnapshots=true in profile "addons-862000"
	I0520 08:05:21.398075    1530 addons.go:66] Setting ingress=true in profile "addons-862000"
	I0520 08:05:21.398078    1530 addons.go:228] Setting addon volumesnapshots=true in "addons-862000"
	I0520 08:05:21.398084    1530 addons.go:228] Setting addon ingress=true in "addons-862000"
	I0520 08:05:21.398084    1530 addons.go:66] Setting csi-hostpath-driver=true in profile "addons-862000"
	I0520 08:05:21.398110    1530 addons.go:228] Setting addon csi-hostpath-driver=true in "addons-862000"
	I0520 08:05:21.398111    1530 addons.go:66] Setting metrics-server=true in profile "addons-862000"
	I0520 08:05:21.398119    1530 host.go:66] Checking if "addons-862000" exists ...
	I0520 08:05:21.398119    1530 host.go:66] Checking if "addons-862000" exists ...
	I0520 08:05:21.398126    1530 addons.go:228] Setting addon metrics-server=true in "addons-862000"
	I0520 08:05:21.398132    1530 addons.go:66] Setting storage-provisioner=true in profile "addons-862000"
	I0520 08:05:21.398136    1530 addons.go:228] Setting addon storage-provisioner=true in "addons-862000"
	I0520 08:05:21.398145    1530 host.go:66] Checking if "addons-862000" exists ...
	I0520 08:05:21.398152    1530 addons.go:66] Setting ingress-dns=true in profile "addons-862000"
	I0520 08:05:21.398156    1530 addons.go:228] Setting addon ingress-dns=true in "addons-862000"
	I0520 08:05:21.398142    1530 addons.go:66] Setting cloud-spanner=true in profile "addons-862000"
	I0520 08:05:21.398175    1530 host.go:66] Checking if "addons-862000" exists ...
	I0520 08:05:21.398178    1530 addons.go:66] Setting inspektor-gadget=true in profile "addons-862000"
	I0520 08:05:21.398183    1530 addons.go:228] Setting addon inspektor-gadget=true in "addons-862000"
	I0520 08:05:21.398193    1530 host.go:66] Checking if "addons-862000" exists ...
	I0520 08:05:21.398169    1530 addons.go:66] Setting registry=true in profile "addons-862000"
	I0520 08:05:21.398235    1530 addons.go:228] Setting addon registry=true in "addons-862000"
	I0520 08:05:21.398148    1530 host.go:66] Checking if "addons-862000" exists ...
	I0520 08:05:21.398214    1530 addons.go:228] Setting addon cloud-spanner=true in "addons-862000"
	I0520 08:05:21.398282    1530 host.go:66] Checking if "addons-862000" exists ...
	I0520 08:05:21.398127    1530 host.go:66] Checking if "addons-862000" exists ...
	I0520 08:05:21.398283    1530 host.go:66] Checking if "addons-862000" exists ...
	I0520 08:05:21.398220    1530 addons.go:66] Setting gcp-auth=true in profile "addons-862000"
	I0520 08:05:21.398217    1530 addons.go:66] Setting default-storageclass=true in profile "addons-862000"
	I0520 08:05:21.398456    1530 mustload.go:65] Loading cluster: addons-862000
	I0520 08:05:21.398460    1530 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-862000"
	I0520 08:05:21.398803    1530 config.go:182] Loaded profile config "addons-862000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0520 08:05:21.398816    1530 config.go:182] Loaded profile config "addons-862000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0520 08:05:21.402994    1530 out.go:177] 
	W0520 08:05:21.399160    1530 host.go:54] host status for "addons-862000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/addons-862000/monitor: connect: connection refused
	W0520 08:05:21.399191    1530 host.go:54] host status for "addons-862000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/addons-862000/monitor: connect: connection refused
	W0520 08:05:21.399223    1530 host.go:54] host status for "addons-862000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/addons-862000/monitor: connect: connection refused
	W0520 08:05:21.399226    1530 host.go:54] host status for "addons-862000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/addons-862000/monitor: connect: connection refused
	W0520 08:05:21.399259    1530 host.go:54] host status for "addons-862000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/addons-862000/monitor: connect: connection refused
	W0520 08:05:21.399346    1530 host.go:54] host status for "addons-862000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/addons-862000/monitor: connect: connection refused
	W0520 08:05:21.399443    1530 host.go:54] host status for "addons-862000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/addons-862000/monitor: connect: connection refused
	W0520 08:05:21.399583    1530 host.go:54] host status for "addons-862000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/addons-862000/monitor: connect: connection refused
	W0520 08:05:21.406095    1530 addons.go:274] "addons-862000" is not running, setting metrics-server=true and skipping enablement (err=<nil>)
	W0520 08:05:21.406109    1530 addons.go:274] "addons-862000" is not running, setting ingress=true and skipping enablement (err=<nil>)
	W0520 08:05:21.406120    1530 addons.go:274] "addons-862000" is not running, setting volumesnapshots=true and skipping enablement (err=<nil>)
	W0520 08:05:21.406129    1530 addons.go:274] "addons-862000" is not running, setting cloud-spanner=true and skipping enablement (err=<nil>)
	W0520 08:05:21.406130    1530 addons_storage_classes.go:55] "addons-862000" is not running, writing default-storageclass=true to disk and skipping enablement
	W0520 08:05:21.406134    1530 addons.go:274] "addons-862000" is not running, setting storage-provisioner=true and skipping enablement (err=<nil>)
	W0520 08:05:21.406138    1530 addons.go:274] "addons-862000" is not running, setting csi-hostpath-driver=true and skipping enablement (err=<nil>)
	W0520 08:05:21.406151    1530 addons.go:274] "addons-862000" is not running, setting inspektor-gadget=true and skipping enablement (err=<nil>)
	W0520 08:05:21.406165    1530 out.go:239] X Exiting due to GUEST_STATUS: Unable to get machine status: state: connect: dial unix /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/addons-862000/monitor: connect: connection refused
	I0520 08:05:21.409987    1530 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0520 08:05:21.413049    1530 out.go:177]   - Using image docker.io/registry:2.8.1
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: connect: dial unix /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/addons-862000/monitor: connect: connection refused
	I0520 08:05:21.413063    1530 addons.go:464] Verifying addon ingress=true in "addons-862000"
	I0520 08:05:21.413057    1530 addons.go:228] Setting addon default-storageclass=true in "addons-862000"
	I0520 08:05:21.413065    1530 addons.go:464] Verifying addon csi-hostpath-driver=true in "addons-862000"
	I0520 08:05:21.413054    1530 addons.go:464] Verifying addon metrics-server=true in "addons-862000"
	W0520 08:05:21.413073    1530 out.go:239] * 
	* 
	I0520 08:05:21.430988    1530 host.go:66] Checking if "addons-862000" exists ...
	I0520 08:05:21.431031    1530 out.go:177]   - Using image gcr.io/google_containers/kube-registry-proxy:0.4
	I0520 08:05:21.435034    1530 addons.go:420] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0520 08:05:21.435482    1530 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 08:05:21.440997    1530 out.go:177] * Verifying ingress addon...
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 08:05:21.441762    1530 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 08:05:21.445557    1530 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0520 08:05:21.446467    1530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0520 08:05:21.446471    1530 out.go:177] * Verifying csi-hostpath-driver addon...
	I0520 08:05:21.448961    1530 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/addons-862000/id_rsa Username:docker}
	I0520 08:05:21.448977    1530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 08:05:21.449022    1530 addons.go:420] installing /etc/kubernetes/addons/registry-rc.yaml
	I0520 08:05:21.455052    1530 out.go:177] 
	I0520 08:05:21.458193    1530 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/addons-862000/id_rsa Username:docker}
	I0520 08:05:21.458603    1530 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0520 08:05:21.464022    1530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0520 08:05:21.464031    1530 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/addons-862000/id_rsa Username:docker}

** /stderr **
addons_test.go:90: out/minikube-darwin-arm64 start -p addons-862000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 80
--- FAIL: TestAddons/Setup (44.81s)
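
The failures in this run share one root cause, visible in the host.go:54 lines above: every attempt to reach the VM's QMP monitor socket (and, in the later tests, /var/run/socket_vmnet) is refused. A minimal Go sketch of that same unix-socket dial follows; it is a hypothetical standalone diagnostic, not part of the minikube tree, and the default path and argument handling are assumptions:

// socketcheck.go: hypothetical diagnostic, not part of the minikube test suite.
// Dials a unix socket the way the qemu2 driver reaches the VM monitor and
// socket_vmnet; "connect: connection refused" here matches the errors above.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Default to the socket_vmnet path from the errors above; a monitor
	// socket path can be passed as the first argument instead.
	path := "/var/run/socket_vmnet"
	if len(os.Args) > 1 {
		path = os.Args[1]
	}
	conn, err := net.DialTimeout("unix", path, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "dial %s: %v\n", path, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Printf("%s is accepting connections\n", path)
}

Running it against both the monitor path and /var/run/socket_vmnet helps distinguish a missing socket_vmnet daemon from a VM process that never started.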

TestCertOptions (10.05s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-149000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-149000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.76264525s)

-- stdout --
	* [cert-options-149000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16543
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node cert-options-149000 in cluster cert-options-149000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-149000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-149000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-149000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-149000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-149000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 89 (78.719083ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-149000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-149000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 89
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-149000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-149000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-149000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 89 (38.949417ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-149000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-149000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 89
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-149000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2023-05-20 08:19:11.467727 -0700 PDT m=+928.456603918
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-149000 -n cert-options-149000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-149000 -n cert-options-149000: exit status 7 (29.096917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-149000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-149000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-149000
--- FAIL: TestCertOptions (10.05s)
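
cert_options_test.go verifies the apiserver certificate's SANs by running openssl inside the VM; with no VM, every SAN assertion above fails by default. The same inspection can be sketched with Go's crypto/x509, under the assumption that the certificate has been copied out of the VM to a local PEM file (the file name here is hypothetical):

// sancheck.go: illustrative sketch; the real test shells out to openssl in the VM.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Hypothetical local copy of /var/lib/minikube/certs/apiserver.crt.
	data, err := os.ReadFile("apiserver.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// The test expects 127.0.0.1 and 192.168.15.15 among the IP SANs and
	// localhost and www.google.com among the DNS SANs.
	fmt.Println("DNS SANs:", cert.DNSNames)
	fmt.Println("IP SANs:", cert.IPAddresses)
}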

TestCertExpiration (195.3s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-950000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-950000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.895563917s)

-- stdout --
	* [cert-expiration-950000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16543
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node cert-expiration-950000 in cluster cert-expiration-950000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-950000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-950000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-950000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
E0520 08:19:07.452260    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/functional-537000/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-950000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-950000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.229569792s)

-- stdout --
	* [cert-expiration-950000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16543
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node cert-expiration-950000 in cluster cert-expiration-950000
	* Restarting existing qemu2 VM for "cert-expiration-950000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-950000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-950000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-950000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-950000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16543
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node cert-expiration-950000 in cluster cert-expiration-950000
	* Restarting existing qemu2 VM for "cert-expiration-950000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-950000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-950000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2023-05-20 08:22:11.575407 -0700 PDT m=+1108.562849668
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-950000 -n cert-expiration-950000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-950000 -n cert-expiration-950000: exit status 7 (70.029042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-950000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-950000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-950000
--- FAIL: TestCertExpiration (195.30s)
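
TestCertExpiration starts a cluster with --cert-expiration=3m, lets the TTL lapse, then restarts with --cert-expiration=8760h and expects a warning about expired certificates; since neither start produced a VM, the warning never appears. The expiry check at the heart of the test amounts to comparing a certificate's NotAfter with the clock, sketched here under the assumption of a local PEM copy (path hypothetical):

// expirycheck.go: illustrative sketch of the expiry comparison the test exercises.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("client.crt") // hypothetical local PEM copy
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if time.Now().After(cert.NotAfter) {
		fmt.Printf("certificate expired at %s\n", cert.NotAfter)
	} else {
		fmt.Printf("certificate valid until %s (%s remaining)\n",
			cert.NotAfter, time.Until(cert.NotAfter).Round(time.Second))
	}
}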

TestDockerFlags (10.07s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-981000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:45: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-981000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.816293417s)

-- stdout --
	* [docker-flags-981000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16543
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node docker-flags-981000 in cluster docker-flags-981000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-981000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 08:18:51.508708    3403 out.go:296] Setting OutFile to fd 1 ...
	I0520 08:18:51.508835    3403 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:18:51.508838    3403 out.go:309] Setting ErrFile to fd 2...
	I0520 08:18:51.508840    3403 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:18:51.508906    3403 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16543-1012/.minikube/bin
	I0520 08:18:51.509985    3403 out.go:303] Setting JSON to false
	I0520 08:18:51.524965    3403 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1102,"bootTime":1684594829,"procs":451,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 08:18:51.525029    3403 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0520 08:18:51.529204    3403 out.go:177] * [docker-flags-981000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0520 08:18:51.537104    3403 out.go:177]   - MINIKUBE_LOCATION=16543
	I0520 08:18:51.537142    3403 notify.go:220] Checking for updates...
	I0520 08:18:51.545033    3403 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	I0520 08:18:51.548111    3403 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 08:18:51.551090    3403 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 08:18:51.558064    3403 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	I0520 08:18:51.561163    3403 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 08:18:51.564812    3403 config.go:182] Loaded profile config "force-systemd-flag-954000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0520 08:18:51.564924    3403 config.go:182] Loaded profile config "multinode-046000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0520 08:18:51.564946    3403 driver.go:375] Setting default libvirt URI to qemu:///system
	I0520 08:18:51.573111    3403 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 08:18:51.576030    3403 start.go:295] selected driver: qemu2
	I0520 08:18:51.576035    3403 start.go:870] validating driver "qemu2" against <nil>
	I0520 08:18:51.576042    3403 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 08:18:51.578062    3403 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0520 08:18:51.582046    3403 out.go:177] * Automatically selected the socket_vmnet network
	I0520 08:18:51.585189    3403 start_flags.go:910] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0520 08:18:51.585209    3403 cni.go:84] Creating CNI manager for ""
	I0520 08:18:51.585218    3403 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 08:18:51.585222    3403 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 08:18:51.585236    3403 start_flags.go:319] config:
	{Name:docker-flags-981000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:docker-flags-981000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0520 08:18:51.585324    3403 iso.go:125] acquiring lock: {Name:mkf05a14b47e9b67445252fbb4917f3bcb37f054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 08:18:51.594119    3403 out.go:177] * Starting control plane node docker-flags-981000 in cluster docker-flags-981000
	I0520 08:18:51.598024    3403 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0520 08:18:51.598046    3403 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0520 08:18:51.598060    3403 cache.go:57] Caching tarball of preloaded images
	I0520 08:18:51.598140    3403 preload.go:174] Found /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 08:18:51.598145    3403 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0520 08:18:51.598217    3403 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/docker-flags-981000/config.json ...
	I0520 08:18:51.598229    3403 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/docker-flags-981000/config.json: {Name:mkf0465b25e43d8cdc86cd173c97de295b8b797d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 08:18:51.598455    3403 cache.go:195] Successfully downloaded all kic artifacts
	I0520 08:18:51.598466    3403 start.go:364] acquiring machines lock for docker-flags-981000: {Name:mk270c257f49033ffda3b70fea9b877c78314aa8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 08:18:51.598494    3403 start.go:368] acquired machines lock for "docker-flags-981000" in 23.875µs
	I0520 08:18:51.598509    3403 start.go:93] Provisioning new machine with config: &{Name:docker-flags-981000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:docker-flags-981000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 08:18:51.598536    3403 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 08:18:51.607053    3403 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0520 08:18:51.624237    3403 start.go:159] libmachine.API.Create for "docker-flags-981000" (driver="qemu2")
	I0520 08:18:51.624275    3403 client.go:168] LocalClient.Create starting
	I0520 08:18:51.624344    3403 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca.pem
	I0520 08:18:51.624369    3403 main.go:141] libmachine: Decoding PEM data...
	I0520 08:18:51.624381    3403 main.go:141] libmachine: Parsing certificate...
	I0520 08:18:51.624433    3403 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/cert.pem
	I0520 08:18:51.624449    3403 main.go:141] libmachine: Decoding PEM data...
	I0520 08:18:51.624455    3403 main.go:141] libmachine: Parsing certificate...
	I0520 08:18:51.624819    3403 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16543-1012/.minikube/cache/iso/arm64/minikube-v1.30.1-1684174510-16506-arm64.iso...
	I0520 08:18:51.747675    3403 main.go:141] libmachine: Creating SSH key...
	I0520 08:18:51.853956    3403 main.go:141] libmachine: Creating Disk image...
	I0520 08:18:51.853962    3403 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 08:18:51.854109    3403 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/docker-flags-981000/disk.qcow2.raw /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/docker-flags-981000/disk.qcow2
	I0520 08:18:51.862661    3403 main.go:141] libmachine: STDOUT: 
	I0520 08:18:51.862675    3403 main.go:141] libmachine: STDERR: 
	I0520 08:18:51.862758    3403 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/docker-flags-981000/disk.qcow2 +20000M
	I0520 08:18:51.869950    3403 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 08:18:51.869961    3403 main.go:141] libmachine: STDERR: 
	I0520 08:18:51.869980    3403 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/docker-flags-981000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/docker-flags-981000/disk.qcow2
	I0520 08:18:51.869988    3403 main.go:141] libmachine: Starting QEMU VM...
	I0520 08:18:51.870022    3403 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/docker-flags-981000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/docker-flags-981000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/docker-flags-981000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:ff:87:62:c9:b3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/docker-flags-981000/disk.qcow2
	I0520 08:18:51.871639    3403 main.go:141] libmachine: STDOUT: 
	I0520 08:18:51.871652    3403 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 08:18:51.871669    3403 client.go:171] LocalClient.Create took 247.388958ms
	I0520 08:18:53.873916    3403 start.go:128] duration metric: createHost completed in 2.275353792s
	I0520 08:18:53.873975    3403 start.go:83] releasing machines lock for "docker-flags-981000", held for 2.27547475s
	W0520 08:18:53.874034    3403 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:18:53.893228    3403 out.go:177] * Deleting "docker-flags-981000" in qemu2 ...
	W0520 08:18:53.908933    3403 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:18:53.908965    3403 start.go:702] Will try again in 5 seconds ...
	I0520 08:18:58.911202    3403 start.go:364] acquiring machines lock for docker-flags-981000: {Name:mk270c257f49033ffda3b70fea9b877c78314aa8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 08:18:58.911621    3403 start.go:368] acquired machines lock for "docker-flags-981000" in 346.416µs
	I0520 08:18:58.911750    3403 start.go:93] Provisioning new machine with config: &{Name:docker-flags-981000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:docker-flags-981000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 08:18:58.912063    3403 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 08:18:58.921604    3403 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0520 08:18:58.970290    3403 start.go:159] libmachine.API.Create for "docker-flags-981000" (driver="qemu2")
	I0520 08:18:58.970340    3403 client.go:168] LocalClient.Create starting
	I0520 08:18:58.970451    3403 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca.pem
	I0520 08:18:58.970492    3403 main.go:141] libmachine: Decoding PEM data...
	I0520 08:18:58.970522    3403 main.go:141] libmachine: Parsing certificate...
	I0520 08:18:58.970598    3403 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/cert.pem
	I0520 08:18:58.970629    3403 main.go:141] libmachine: Decoding PEM data...
	I0520 08:18:58.970652    3403 main.go:141] libmachine: Parsing certificate...
	I0520 08:18:58.971544    3403 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16543-1012/.minikube/cache/iso/arm64/minikube-v1.30.1-1684174510-16506-arm64.iso...
	I0520 08:18:59.099412    3403 main.go:141] libmachine: Creating SSH key...
	I0520 08:18:59.235830    3403 main.go:141] libmachine: Creating Disk image...
	I0520 08:18:59.235836    3403 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 08:18:59.236014    3403 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/docker-flags-981000/disk.qcow2.raw /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/docker-flags-981000/disk.qcow2
	I0520 08:18:59.244717    3403 main.go:141] libmachine: STDOUT: 
	I0520 08:18:59.244730    3403 main.go:141] libmachine: STDERR: 
	I0520 08:18:59.244780    3403 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/docker-flags-981000/disk.qcow2 +20000M
	I0520 08:18:59.251974    3403 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 08:18:59.251987    3403 main.go:141] libmachine: STDERR: 
	I0520 08:18:59.252010    3403 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/docker-flags-981000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/docker-flags-981000/disk.qcow2
	I0520 08:18:59.252018    3403 main.go:141] libmachine: Starting QEMU VM...
	I0520 08:18:59.252054    3403 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/docker-flags-981000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/docker-flags-981000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/docker-flags-981000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:4b:03:b2:61:e2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/docker-flags-981000/disk.qcow2
	I0520 08:18:59.253667    3403 main.go:141] libmachine: STDOUT: 
	I0520 08:18:59.253684    3403 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 08:18:59.253696    3403 client.go:171] LocalClient.Create took 283.347458ms
	I0520 08:19:01.256005    3403 start.go:128] duration metric: createHost completed in 2.343908167s
	I0520 08:19:01.256085    3403 start.go:83] releasing machines lock for "docker-flags-981000", held for 2.344442667s
	W0520 08:19:01.256731    3403 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-981000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-981000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:19:01.267275    3403 out.go:177] 
	W0520 08:19:01.271451    3403 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 08:19:01.271492    3403 out.go:239] * 
	* 
	W0520 08:19:01.274027    3403 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 08:19:01.284416    3403 out.go:177] 

** /stderr **
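
In both create attempts above, the disk-image step succeeds and the run only fails once it reaches the network step. The two qemu-img calls the driver issues (paths shortened here for readability; qemu-img ships with the Homebrew qemu 7.2.0 visible in the log) are a standard raw-to-qcow2 conversion followed by a sparse grow:

    # shortened reproduction of the driver's disk-creation step
    qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2   # rewrap the raw boot2docker seed image as qcow2
    qemu-img resize disk.qcow2 +20000M                           # raise the virtual size by 20000 MB; the file stays sparse

Disk provisioning can therefore be ruled out; the failure is isolated to the socket_vmnet connection that follows.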
docker_test.go:47: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-981000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:50: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-981000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-981000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 89 (80.19625ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p docker-flags-981000"

-- /stdout --
docker_test.go:52: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-981000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 89
docker_test.go:57: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-981000\"\n"*.
docker_test.go:57: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-981000\"\n"*.
docker_test.go:61: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-981000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:61: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-981000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 89 (43.6565ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p docker-flags-981000"

-- /stdout --
docker_test.go:63: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-981000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 89
docker_test.go:67: expected "out/minikube-darwin-arm64 -p docker-flags-981000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to include *--debug*. output: "* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-981000\"\n"
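
For contrast, these two assertions read the flags back out of the guest's docker systemd unit. On a run where the VM boots, the output would look roughly like the following (illustrative only; the ExecStart formatting is systemd's own, elided with ...):

    $ out/minikube-darwin-arm64 -p docker-flags-981000 ssh "sudo systemctl show docker --property=Environment --no-pager"
    Environment=FOO=BAR BAZ=BAT ...
    $ out/minikube-darwin-arm64 -p docker-flags-981000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
    ExecStart={ path=/usr/bin/dockerd ; argv[]=/usr/bin/dockerd ... --debug --icc=true ... }

The test only greps that output for FOO=BAR, BAZ=BAT and --debug, so the failures above follow directly from the cluster never starting, not from the flag plumbing itself.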
panic.go:522: *** TestDockerFlags FAILED at 2023-05-20 08:19:01.425098 -0700 PDT m=+918.413958001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-981000 -n docker-flags-981000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-981000 -n docker-flags-981000: exit status 7 (28.081416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-981000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-981000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-981000
--- FAIL: TestDockerFlags (10.07s)
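
Every start attempt in this test, and in the TestForceSystemd* failures that follow, dies at the same point: socket_vmnet_client cannot connect to the unix socket at /var/run/socket_vmnet. The client connects first and hands the connected descriptor to QEMU (hence -netdev socket,fd=3 in the command line above), so QEMU never even launches. A minimal host-side triage, assuming socket_vmnet is installed under /opt/socket_vmnet as the log's paths indicate:

    pgrep -fl socket_vmnet        # is the socket_vmnet daemon running at all?
    ls -l /var/run/socket_vmnet   # does the socket exist, and can the jenkins user reach it?
    # Restart the daemon if needed; this mirrors socket_vmnet's documented
    # standalone invocation, but verify the flags against the installed version:
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

Until that socket accepts connections, every qemu2-driver test that auto-selects the socket_vmnet network will fail with exit status 80 in the same way.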

TestForceSystemdFlag (11.78s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-954000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:85: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-954000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (11.567625292s)

-- stdout --
	* [force-systemd-flag-954000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16543
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node force-systemd-flag-954000 in cluster force-systemd-flag-954000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-954000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 08:18:44.687857    3380 out.go:296] Setting OutFile to fd 1 ...
	I0520 08:18:44.687990    3380 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:18:44.687993    3380 out.go:309] Setting ErrFile to fd 2...
	I0520 08:18:44.687996    3380 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:18:44.688062    3380 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16543-1012/.minikube/bin
	I0520 08:18:44.689053    3380 out.go:303] Setting JSON to false
	I0520 08:18:44.704048    3380 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1095,"bootTime":1684594829,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 08:18:44.704121    3380 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0520 08:18:44.710103    3380 out.go:177] * [force-systemd-flag-954000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0520 08:18:44.713063    3380 out.go:177]   - MINIKUBE_LOCATION=16543
	I0520 08:18:44.713098    3380 notify.go:220] Checking for updates...
	I0520 08:18:44.715965    3380 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	I0520 08:18:44.719948    3380 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 08:18:44.723943    3380 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 08:18:44.727010    3380 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	I0520 08:18:44.729963    3380 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 08:18:44.733295    3380 config.go:182] Loaded profile config "force-systemd-env-476000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0520 08:18:44.733362    3380 config.go:182] Loaded profile config "multinode-046000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0520 08:18:44.733380    3380 driver.go:375] Setting default libvirt URI to qemu:///system
	I0520 08:18:44.740984    3380 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 08:18:44.743958    3380 start.go:295] selected driver: qemu2
	I0520 08:18:44.743962    3380 start.go:870] validating driver "qemu2" against <nil>
	I0520 08:18:44.743967    3380 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 08:18:44.745784    3380 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0520 08:18:44.748972    3380 out.go:177] * Automatically selected the socket_vmnet network
	I0520 08:18:44.750402    3380 start_flags.go:897] Wait components to verify : map[apiserver:true system_pods:true]
	I0520 08:18:44.750415    3380 cni.go:84] Creating CNI manager for ""
	I0520 08:18:44.750421    3380 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 08:18:44.750425    3380 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 08:18:44.750432    3380 start_flags.go:319] config:
	{Name:force-systemd-flag-954000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:force-systemd-flag-954000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0520 08:18:44.750508    3380 iso.go:125] acquiring lock: {Name:mkf05a14b47e9b67445252fbb4917f3bcb37f054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 08:18:44.758965    3380 out.go:177] * Starting control plane node force-systemd-flag-954000 in cluster force-systemd-flag-954000
	I0520 08:18:44.762981    3380 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0520 08:18:44.763005    3380 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0520 08:18:44.763019    3380 cache.go:57] Caching tarball of preloaded images
	I0520 08:18:44.763084    3380 preload.go:174] Found /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 08:18:44.763089    3380 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0520 08:18:44.763153    3380 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/force-systemd-flag-954000/config.json ...
	I0520 08:18:44.763165    3380 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/force-systemd-flag-954000/config.json: {Name:mk30bf16e60d234756352913533ff9cdcdff32bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 08:18:44.763356    3380 cache.go:195] Successfully downloaded all kic artifacts
	I0520 08:18:44.763373    3380 start.go:364] acquiring machines lock for force-systemd-flag-954000: {Name:mk270c257f49033ffda3b70fea9b877c78314aa8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 08:18:44.763403    3380 start.go:368] acquired machines lock for "force-systemd-flag-954000" in 25.292µs
	I0520 08:18:44.763419    3380 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-954000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:force-systemd-flag-954000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 08:18:44.763446    3380 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 08:18:44.770939    3380 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0520 08:18:44.787081    3380 start.go:159] libmachine.API.Create for "force-systemd-flag-954000" (driver="qemu2")
	I0520 08:18:44.787106    3380 client.go:168] LocalClient.Create starting
	I0520 08:18:44.787166    3380 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca.pem
	I0520 08:18:44.787187    3380 main.go:141] libmachine: Decoding PEM data...
	I0520 08:18:44.787201    3380 main.go:141] libmachine: Parsing certificate...
	I0520 08:18:44.787249    3380 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/cert.pem
	I0520 08:18:44.787264    3380 main.go:141] libmachine: Decoding PEM data...
	I0520 08:18:44.787272    3380 main.go:141] libmachine: Parsing certificate...
	I0520 08:18:44.787608    3380 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16543-1012/.minikube/cache/iso/arm64/minikube-v1.30.1-1684174510-16506-arm64.iso...
	I0520 08:18:44.902060    3380 main.go:141] libmachine: Creating SSH key...
	I0520 08:18:44.976289    3380 main.go:141] libmachine: Creating Disk image...
	I0520 08:18:44.976295    3380 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 08:18:44.976441    3380 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/force-systemd-flag-954000/disk.qcow2.raw /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/force-systemd-flag-954000/disk.qcow2
	I0520 08:18:44.984866    3380 main.go:141] libmachine: STDOUT: 
	I0520 08:18:44.984878    3380 main.go:141] libmachine: STDERR: 
	I0520 08:18:44.984918    3380 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/force-systemd-flag-954000/disk.qcow2 +20000M
	I0520 08:18:44.992013    3380 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 08:18:44.992027    3380 main.go:141] libmachine: STDERR: 
	I0520 08:18:44.992055    3380 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/force-systemd-flag-954000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/force-systemd-flag-954000/disk.qcow2
	I0520 08:18:44.992061    3380 main.go:141] libmachine: Starting QEMU VM...
	I0520 08:18:44.992093    3380 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/force-systemd-flag-954000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/force-systemd-flag-954000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/force-systemd-flag-954000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:18:73:bc:43:f0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/force-systemd-flag-954000/disk.qcow2
	I0520 08:18:44.993545    3380 main.go:141] libmachine: STDOUT: 
	I0520 08:18:44.993557    3380 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 08:18:44.993573    3380 client.go:171] LocalClient.Create took 206.461417ms
	I0520 08:18:46.995725    3380 start.go:128] duration metric: createHost completed in 2.232266541s
	I0520 08:18:46.995789    3380 start.go:83] releasing machines lock for "force-systemd-flag-954000", held for 2.2323795s
	W0520 08:18:46.995888    3380 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:18:47.003320    3380 out.go:177] * Deleting "force-systemd-flag-954000" in qemu2 ...
	W0520 08:18:47.024641    3380 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:18:47.024673    3380 start.go:702] Will try again in 5 seconds ...
	I0520 08:18:52.026789    3380 start.go:364] acquiring machines lock for force-systemd-flag-954000: {Name:mk270c257f49033ffda3b70fea9b877c78314aa8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 08:18:53.874115    3380 start.go:368] acquired machines lock for "force-systemd-flag-954000" in 1.847246542s
	I0520 08:18:53.874292    3380 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-954000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:force-systemd-flag-954000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 08:18:53.874546    3380 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 08:18:53.884174    3380 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0520 08:18:53.930775    3380 start.go:159] libmachine.API.Create for "force-systemd-flag-954000" (driver="qemu2")
	I0520 08:18:53.930820    3380 client.go:168] LocalClient.Create starting
	I0520 08:18:53.930945    3380 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca.pem
	I0520 08:18:53.930998    3380 main.go:141] libmachine: Decoding PEM data...
	I0520 08:18:53.931014    3380 main.go:141] libmachine: Parsing certificate...
	I0520 08:18:53.931074    3380 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/cert.pem
	I0520 08:18:53.931104    3380 main.go:141] libmachine: Decoding PEM data...
	I0520 08:18:53.931119    3380 main.go:141] libmachine: Parsing certificate...
	I0520 08:18:53.931677    3380 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16543-1012/.minikube/cache/iso/arm64/minikube-v1.30.1-1684174510-16506-arm64.iso...
	I0520 08:18:54.050366    3380 main.go:141] libmachine: Creating SSH key...
	I0520 08:18:54.166744    3380 main.go:141] libmachine: Creating Disk image...
	I0520 08:18:54.166751    3380 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 08:18:54.166897    3380 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/force-systemd-flag-954000/disk.qcow2.raw /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/force-systemd-flag-954000/disk.qcow2
	I0520 08:18:54.175737    3380 main.go:141] libmachine: STDOUT: 
	I0520 08:18:54.175750    3380 main.go:141] libmachine: STDERR: 
	I0520 08:18:54.175809    3380 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/force-systemd-flag-954000/disk.qcow2 +20000M
	I0520 08:18:54.182890    3380 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 08:18:54.182904    3380 main.go:141] libmachine: STDERR: 
	I0520 08:18:54.182917    3380 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/force-systemd-flag-954000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/force-systemd-flag-954000/disk.qcow2
	I0520 08:18:54.182925    3380 main.go:141] libmachine: Starting QEMU VM...
	I0520 08:18:54.182974    3380 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/force-systemd-flag-954000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/force-systemd-flag-954000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/force-systemd-flag-954000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:ac:aa:fa:3f:c8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/force-systemd-flag-954000/disk.qcow2
	I0520 08:18:54.184481    3380 main.go:141] libmachine: STDOUT: 
	I0520 08:18:54.184494    3380 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 08:18:54.184506    3380 client.go:171] LocalClient.Create took 253.679042ms
	I0520 08:18:56.186717    3380 start.go:128] duration metric: createHost completed in 2.312131292s
	I0520 08:18:56.186767    3380 start.go:83] releasing machines lock for "force-systemd-flag-954000", held for 2.312616125s
	W0520 08:18:56.187380    3380 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-954000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-954000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:18:56.198192    3380 out.go:177] 
	W0520 08:18:56.203320    3380 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 08:18:56.203347    3380 out.go:239] * 
	* 
	W0520 08:18:56.205768    3380 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 08:18:56.216105    3380 out.go:177] 

** /stderr **
docker_test.go:87: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-954000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:104: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-954000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:104: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-954000 ssh "docker info --format {{.CgroupDriver}}": exit status 89 (77.567292ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p force-systemd-flag-954000"

-- /stdout --
docker_test.go:106: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-954000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 89
docker_test.go:100: *** TestForceSystemdFlag FAILED at 2023-05-20 08:18:56.309665 -0700 PDT m=+913.298515959
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-954000 -n force-systemd-flag-954000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-954000 -n force-systemd-flag-954000: exit status 7 (32.775875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-954000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-954000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-954000
--- FAIL: TestForceSystemdFlag (11.78s)
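
Had provisioning succeeded, the check at docker_test.go:104 verifies that --force-systemd actually flipped Docker's cgroup driver inside the guest; a passing run returns the literal string the test looks for (illustrative):

    $ out/minikube-darwin-arm64 -p force-systemd-flag-954000 ssh "docker info --format {{.CgroupDriver}}"
    systemd     # with --force-systemd; an unforced daemon typically reports cgroupfs

As with TestDockerFlags, the assertion never runs because host creation fails at the socket_vmnet connection.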

TestForceSystemdEnv (10.03s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-476000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:149: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-476000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.812215167s)

-- stdout --
	* [force-systemd-env-476000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16543
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node force-systemd-env-476000 in cluster force-systemd-env-476000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-476000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 08:18:41.480539    3362 out.go:296] Setting OutFile to fd 1 ...
	I0520 08:18:41.480680    3362 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:18:41.480684    3362 out.go:309] Setting ErrFile to fd 2...
	I0520 08:18:41.480687    3362 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:18:41.480768    3362 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16543-1012/.minikube/bin
	I0520 08:18:41.481881    3362 out.go:303] Setting JSON to false
	I0520 08:18:41.498964    3362 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1092,"bootTime":1684594829,"procs":450,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 08:18:41.499041    3362 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0520 08:18:41.502782    3362 out.go:177] * [force-systemd-env-476000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0520 08:18:41.510736    3362 out.go:177]   - MINIKUBE_LOCATION=16543
	I0520 08:18:41.513788    3362 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	I0520 08:18:41.510763    3362 notify.go:220] Checking for updates...
	I0520 08:18:41.520776    3362 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 08:18:41.524750    3362 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 08:18:41.527816    3362 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	I0520 08:18:41.530886    3362 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0520 08:18:41.534086    3362 config.go:182] Loaded profile config "multinode-046000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0520 08:18:41.534114    3362 driver.go:375] Setting default libvirt URI to qemu:///system
	I0520 08:18:41.537832    3362 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 08:18:41.544721    3362 start.go:295] selected driver: qemu2
	I0520 08:18:41.544727    3362 start.go:870] validating driver "qemu2" against <nil>
	I0520 08:18:41.544733    3362 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 08:18:41.546652    3362 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0520 08:18:41.550774    3362 out.go:177] * Automatically selected the socket_vmnet network
	I0520 08:18:41.553767    3362 start_flags.go:897] Wait components to verify : map[apiserver:true system_pods:true]
	I0520 08:18:41.553787    3362 cni.go:84] Creating CNI manager for ""
	I0520 08:18:41.553792    3362 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 08:18:41.553799    3362 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 08:18:41.553805    3362 start_flags.go:319] config:
	{Name:force-systemd-env-476000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:force-systemd-env-476000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0520 08:18:41.553878    3362 iso.go:125] acquiring lock: {Name:mkf05a14b47e9b67445252fbb4917f3bcb37f054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 08:18:41.561771    3362 out.go:177] * Starting control plane node force-systemd-env-476000 in cluster force-systemd-env-476000
	I0520 08:18:41.565740    3362 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0520 08:18:41.565763    3362 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0520 08:18:41.565772    3362 cache.go:57] Caching tarball of preloaded images
	I0520 08:18:41.565830    3362 preload.go:174] Found /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 08:18:41.565834    3362 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0520 08:18:41.565880    3362 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/force-systemd-env-476000/config.json ...
	I0520 08:18:41.565894    3362 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/force-systemd-env-476000/config.json: {Name:mkd447c4cd2ce863474944e90c0b78b302504c88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 08:18:41.566082    3362 cache.go:195] Successfully downloaded all kic artifacts
	I0520 08:18:41.566091    3362 start.go:364] acquiring machines lock for force-systemd-env-476000: {Name:mk270c257f49033ffda3b70fea9b877c78314aa8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 08:18:41.566115    3362 start.go:368] acquired machines lock for "force-systemd-env-476000" in 19.792µs
	I0520 08:18:41.566131    3362 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-476000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:force-systemd-env-476000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 08:18:41.566154    3362 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 08:18:41.573807    3362 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0520 08:18:41.590375    3362 start.go:159] libmachine.API.Create for "force-systemd-env-476000" (driver="qemu2")
	I0520 08:18:41.590404    3362 client.go:168] LocalClient.Create starting
	I0520 08:18:41.590485    3362 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca.pem
	I0520 08:18:41.590522    3362 main.go:141] libmachine: Decoding PEM data...
	I0520 08:18:41.590537    3362 main.go:141] libmachine: Parsing certificate...
	I0520 08:18:41.590596    3362 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/cert.pem
	I0520 08:18:41.590614    3362 main.go:141] libmachine: Decoding PEM data...
	I0520 08:18:41.590626    3362 main.go:141] libmachine: Parsing certificate...
	I0520 08:18:41.591044    3362 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16543-1012/.minikube/cache/iso/arm64/minikube-v1.30.1-1684174510-16506-arm64.iso...
	I0520 08:18:41.732915    3362 main.go:141] libmachine: Creating SSH key...
	I0520 08:18:41.821841    3362 main.go:141] libmachine: Creating Disk image...
	I0520 08:18:41.821852    3362 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 08:18:41.822037    3362 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/force-systemd-env-476000/disk.qcow2.raw /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/force-systemd-env-476000/disk.qcow2
	I0520 08:18:41.830747    3362 main.go:141] libmachine: STDOUT: 
	I0520 08:18:41.830761    3362 main.go:141] libmachine: STDERR: 
	I0520 08:18:41.830829    3362 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/force-systemd-env-476000/disk.qcow2 +20000M
	I0520 08:18:41.838396    3362 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 08:18:41.838416    3362 main.go:141] libmachine: STDERR: 
	I0520 08:18:41.838434    3362 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/force-systemd-env-476000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/force-systemd-env-476000/disk.qcow2
	I0520 08:18:41.838440    3362 main.go:141] libmachine: Starting QEMU VM...
	I0520 08:18:41.838476    3362 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/force-systemd-env-476000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/force-systemd-env-476000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/force-systemd-env-476000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:75:ba:14:9d:b0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/force-systemd-env-476000/disk.qcow2
	I0520 08:18:41.840167    3362 main.go:141] libmachine: STDOUT: 
	I0520 08:18:41.840180    3362 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 08:18:41.840198    3362 client.go:171] LocalClient.Create took 249.791333ms
	I0520 08:18:43.842419    3362 start.go:128] duration metric: createHost completed in 2.276239s
	I0520 08:18:43.842495    3362 start.go:83] releasing machines lock for "force-systemd-env-476000", held for 2.27637275s
	W0520 08:18:43.842562    3362 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:18:43.852892    3362 out.go:177] * Deleting "force-systemd-env-476000" in qemu2 ...
	W0520 08:18:43.871343    3362 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:18:43.871378    3362 start.go:702] Will try again in 5 seconds ...
	I0520 08:18:48.873581    3362 start.go:364] acquiring machines lock for force-systemd-env-476000: {Name:mk270c257f49033ffda3b70fea9b877c78314aa8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 08:18:48.874063    3362 start.go:368] acquired machines lock for "force-systemd-env-476000" in 382.542µs
	I0520 08:18:48.874237    3362 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-476000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:force-systemd-env-476000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 08:18:48.874527    3362 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 08:18:48.884228    3362 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0520 08:18:48.931859    3362 start.go:159] libmachine.API.Create for "force-systemd-env-476000" (driver="qemu2")
	I0520 08:18:48.931908    3362 client.go:168] LocalClient.Create starting
	I0520 08:18:48.932022    3362 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca.pem
	I0520 08:18:48.932068    3362 main.go:141] libmachine: Decoding PEM data...
	I0520 08:18:48.932084    3362 main.go:141] libmachine: Parsing certificate...
	I0520 08:18:48.932166    3362 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/cert.pem
	I0520 08:18:48.932195    3362 main.go:141] libmachine: Decoding PEM data...
	I0520 08:18:48.932208    3362 main.go:141] libmachine: Parsing certificate...
	I0520 08:18:48.932725    3362 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16543-1012/.minikube/cache/iso/arm64/minikube-v1.30.1-1684174510-16506-arm64.iso...
	I0520 08:18:49.060500    3362 main.go:141] libmachine: Creating SSH key...
	I0520 08:18:49.200538    3362 main.go:141] libmachine: Creating Disk image...
	I0520 08:18:49.200544    3362 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 08:18:49.200705    3362 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/force-systemd-env-476000/disk.qcow2.raw /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/force-systemd-env-476000/disk.qcow2
	I0520 08:18:49.209708    3362 main.go:141] libmachine: STDOUT: 
	I0520 08:18:49.209723    3362 main.go:141] libmachine: STDERR: 
	I0520 08:18:49.209780    3362 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/force-systemd-env-476000/disk.qcow2 +20000M
	I0520 08:18:49.216956    3362 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 08:18:49.216970    3362 main.go:141] libmachine: STDERR: 
	I0520 08:18:49.216985    3362 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/force-systemd-env-476000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/force-systemd-env-476000/disk.qcow2
	I0520 08:18:49.216991    3362 main.go:141] libmachine: Starting QEMU VM...
	I0520 08:18:49.217036    3362 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/force-systemd-env-476000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/force-systemd-env-476000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/force-systemd-env-476000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:ac:29:05:66:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/force-systemd-env-476000/disk.qcow2
	I0520 08:18:49.218577    3362 main.go:141] libmachine: STDOUT: 
	I0520 08:18:49.218592    3362 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 08:18:49.218604    3362 client.go:171] LocalClient.Create took 286.689166ms
	I0520 08:18:51.220836    3362 start.go:128] duration metric: createHost completed in 2.346289167s
	I0520 08:18:51.220939    3362 start.go:83] releasing machines lock for "force-systemd-env-476000", held for 2.346841959s
	W0520 08:18:51.221545    3362 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-476000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-476000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:18:51.230250    3362 out.go:177] 
	W0520 08:18:51.235410    3362 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 08:18:51.235434    3362 out.go:239] * 
	* 
	W0520 08:18:51.239229    3362 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 08:18:51.251272    3362 out.go:177] 

** /stderr **
docker_test.go:151: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-476000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:104: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-476000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:104: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-476000 ssh "docker info --format {{.CgroupDriver}}": exit status 89 (82.198666ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p force-systemd-env-476000"

-- /stdout --
docker_test.go:106: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-476000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 89
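Exit status 89 is minikube's "control plane must be running" guard, so once the VM failed to start, the cgroup-driver probe cannot succeed. A minimal Go sketch of guarding that probe with the same status command the harness uses below (binary path, profile name, and format string taken from this log; the guard itself is an assumption, not part of docker_test.go):

	// Hypothetical guard: skip the ssh probe when the host is not running,
	// since minikube then exits with status 89.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		status, _ := exec.Command("out/minikube-darwin-arm64", "status",
			"--format={{.Host}}", "-p", "force-systemd-env-476000").Output()
		if strings.TrimSpace(string(status)) != "Running" {
			fmt.Println("host state:", strings.TrimSpace(string(status)), "- skipping probe")
			return
		}
		probe, err := exec.Command("out/minikube-darwin-arm64", "-p", "force-systemd-env-476000",
			"ssh", "docker info --format {{.CgroupDriver}}").Output()
		if err != nil {
			fmt.Println("probe failed:", err)
			return
		}
		fmt.Println("docker cgroup driver:", strings.TrimSpace(string(probe)))
	}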
docker_test.go:160: *** TestForceSystemdEnv FAILED at 2023-05-20 08:18:51.348359 -0700 PDT m=+908.337201376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-476000 -n force-systemd-env-476000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-476000 -n force-systemd-env-476000: exit status 7 (34.8045ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-476000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-476000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-476000
--- FAIL: TestForceSystemdEnv (10.03s)
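Both create attempts above die at the same point: socket_vmnet_client cannot reach the /var/run/socket_vmnet Unix socket, so QEMU is never launched. A minimal pre-flight sketch for that socket, assuming only the path seen in the log (illustrative, not code from the qemu2 driver):

	// Dial the socket the way socket_vmnet_client would; "connection refused"
	// here matches the STDERR above: no socket_vmnet daemon is serving it.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet daemon is listening")
	}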

TestFunctional/parallel/ServiceCmdConnect (36.62s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-537000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1633: (dbg) Run:  kubectl --context functional-537000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1638: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58d66798bb-rszsd" [d5deaded-35e0-4666-ae93-d2089778768d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-58d66798bb-rszsd" [d5deaded-35e0-4666-ae93-d2089778768d] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1638: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.006706833s
functional_test.go:1647: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 service hello-node-connect --url
functional_test.go:1653: found endpoint for hello-node-connect: http://192.168.105.4:30213
functional_test.go:1659: error fetching http://192.168.105.4:30213: Get "http://192.168.105.4:30213": dial tcp 192.168.105.4:30213: connect: connection refused
functional_test.go:1659: error fetching http://192.168.105.4:30213: Get "http://192.168.105.4:30213": dial tcp 192.168.105.4:30213: connect: connection refused
functional_test.go:1659: error fetching http://192.168.105.4:30213: Get "http://192.168.105.4:30213": dial tcp 192.168.105.4:30213: connect: connection refused
functional_test.go:1659: error fetching http://192.168.105.4:30213: Get "http://192.168.105.4:30213": dial tcp 192.168.105.4:30213: connect: connection refused
functional_test.go:1659: error fetching http://192.168.105.4:30213: Get "http://192.168.105.4:30213": dial tcp 192.168.105.4:30213: connect: connection refused
functional_test.go:1659: error fetching http://192.168.105.4:30213: Get "http://192.168.105.4:30213": dial tcp 192.168.105.4:30213: connect: connection refused
functional_test.go:1659: error fetching http://192.168.105.4:30213: Get "http://192.168.105.4:30213": dial tcp 192.168.105.4:30213: connect: connection refused
functional_test.go:1659: error fetching http://192.168.105.4:30213: Get "http://192.168.105.4:30213": dial tcp 192.168.105.4:30213: connect: connection refused
functional_test.go:1679: failed to fetch http://192.168.105.4:30213: Get "http://192.168.105.4:30213": dial tcp 192.168.105.4:30213: connect: connection refused
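The eight identical failures above come from the test's polling loop against the NodePort URL. A minimal sketch of that kind of retrying GET, assuming the URL from the log and an illustrative attempt count and backoff (not the constants in functional_test.go):

	// Retry an HTTP GET against the NodePort endpoint reported by
	// `minikube service --url`. With no Ready endpoints behind the Service
	// (see the svc describe below), every attempt is refused.
	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		url := "http://192.168.105.4:30213"
		for attempt := 1; attempt <= 8; attempt++ {
			resp, err := http.Get(url)
			if err == nil {
				resp.Body.Close()
				fmt.Println("service reachable on attempt", attempt)
				return
			}
			fmt.Println("attempt", attempt, "failed:", err)
			time.Sleep(2 * time.Second)
		}
	}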
functional_test.go:1596: service test failed - dumping debug information
functional_test.go:1597: -----------------------service failure post-mortem--------------------------------
functional_test.go:1600: (dbg) Run:  kubectl --context functional-537000 describe po hello-node-connect
functional_test.go:1604: hello-node pod describe:
Name:             hello-node-connect-58d66798bb-rszsd
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-537000/192.168.105.4
Start Time:       Sat, 20 May 2023 08:09:03 -0700
Labels:           app=hello-node-connect
                  pod-template-hash=58d66798bb
Annotations:      <none>
Status:           Running
IP:               10.244.0.9
IPs:
  IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-connect-58d66798bb
Containers:
  echoserver-arm:
    Container ID:   docker://ac4e7966822714724c7ee64fd3ff51261b37fd67c2942ed2835f3c6bdf79503c
    Image:          registry.k8s.io/echoserver-arm:1.8
    Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Sat, 20 May 2023 08:09:16 -0700
      Finished:     Sat, 20 May 2023 08:09:16 -0700
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qlh2q (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kube-api-access-qlh2q:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  35s                default-scheduler  Successfully assigned default/hello-node-connect-58d66798bb-rszsd to functional-537000
  Normal   Pulled     22s (x3 over 35s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
  Normal   Created    22s (x3 over 35s)  kubelet            Created container echoserver-arm
  Normal   Started    22s (x3 over 35s)  kubelet            Started container echoserver-arm
  Warning  BackOff    6s (x3 over 33s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-58d66798bb-rszsd_default(d5deaded-35e0-4666-ae93-d2089778768d)

functional_test.go:1606: (dbg) Run:  kubectl --context functional-537000 logs -l app=hello-node-connect
functional_test.go:1610: hello-node logs:
exec /usr/sbin/nginx: exec format error
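"exec format error" is the usual symptom of a CPU-architecture mismatch: the entrypoint binary inside registry.k8s.io/echoserver-arm:1.8 is apparently not built for the node's arm64 CPU, so the container exits immediately and loops in CrashLoopBackOff. A hypothetical check via the Docker CLI (assuming docker is available where the image is pulled):

	// Inspect the image's declared architecture; on the arm64 node above,
	// anything other than "arm64" would explain the crash loop.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("docker", "image", "inspect",
			"--format", "{{.Architecture}}",
			"registry.k8s.io/echoserver-arm:1.8").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Printf("image architecture: %s", out)
	}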
functional_test.go:1612: (dbg) Run:  kubectl --context functional-537000 describe svc hello-node-connect
functional_test.go:1616: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.110.197.145
IPs:                      10.110.197.145
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30213/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
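Note the empty Endpoints line: because the only pod matching the selector never becomes Ready, the Service has nothing to forward to, which is exactly why the NodePort connections were refused. A small sketch that surfaces this condition, shelling out to kubectl (context and Service name from the log; the jsonpath expression is an assumption):

	// List Ready endpoint IPs behind the Service; an empty result matches
	// the blank "Endpoints:" field in the describe output above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-537000",
			"get", "endpoints", "hello-node-connect",
			"-o", "jsonpath={.subsets[*].addresses[*].ip}").Output()
		if err != nil {
			fmt.Println("kubectl failed:", err)
			return
		}
		if strings.TrimSpace(string(out)) == "" {
			fmt.Println("no ready endpoints behind hello-node-connect")
			return
		}
		fmt.Println("ready endpoints:", string(out))
	}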
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-537000 -n functional-537000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |-----------|--------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                                        Args                                                        |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|--------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| mount     | -p functional-537000                                                                                               | functional-537000 | jenkins | v1.30.1 | 20 May 23 08:09 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1384780782/001:/mount2 |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                             |                   |         |         |                     |                     |
	| mount     | -p functional-537000                                                                                               | functional-537000 | jenkins | v1.30.1 | 20 May 23 08:09 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1384780782/001:/mount3 |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                             |                   |         |         |                     |                     |
	| ssh       | functional-537000 ssh findmnt                                                                                      | functional-537000 | jenkins | v1.30.1 | 20 May 23 08:09 PDT |                     |
	|           | -T /mount1                                                                                                         |                   |         |         |                     |                     |
	| ssh       | functional-537000 ssh findmnt                                                                                      | functional-537000 | jenkins | v1.30.1 | 20 May 23 08:09 PDT | 20 May 23 08:09 PDT |
	|           | -T /mount1                                                                                                         |                   |         |         |                     |                     |
	| ssh       | functional-537000 ssh findmnt                                                                                      | functional-537000 | jenkins | v1.30.1 | 20 May 23 08:09 PDT | 20 May 23 08:09 PDT |
	|           | -T /mount2                                                                                                         |                   |         |         |                     |                     |
	| ssh       | functional-537000 ssh findmnt                                                                                      | functional-537000 | jenkins | v1.30.1 | 20 May 23 08:09 PDT |                     |
	|           | -T /mount3                                                                                                         |                   |         |         |                     |                     |
	| ssh       | functional-537000 ssh findmnt                                                                                      | functional-537000 | jenkins | v1.30.1 | 20 May 23 08:09 PDT | 20 May 23 08:09 PDT |
	|           | -T /mount1                                                                                                         |                   |         |         |                     |                     |
	| ssh       | functional-537000 ssh findmnt                                                                                      | functional-537000 | jenkins | v1.30.1 | 20 May 23 08:09 PDT | 20 May 23 08:09 PDT |
	|           | -T /mount2                                                                                                         |                   |         |         |                     |                     |
	| ssh       | functional-537000 ssh findmnt                                                                                      | functional-537000 | jenkins | v1.30.1 | 20 May 23 08:09 PDT |                     |
	|           | -T /mount3                                                                                                         |                   |         |         |                     |                     |
	| ssh       | functional-537000 ssh findmnt                                                                                      | functional-537000 | jenkins | v1.30.1 | 20 May 23 08:09 PDT | 20 May 23 08:09 PDT |
	|           | -T /mount1                                                                                                         |                   |         |         |                     |                     |
	| ssh       | functional-537000 ssh findmnt                                                                                      | functional-537000 | jenkins | v1.30.1 | 20 May 23 08:09 PDT | 20 May 23 08:09 PDT |
	|           | -T /mount2                                                                                                         |                   |         |         |                     |                     |
	| ssh       | functional-537000 ssh findmnt                                                                                      | functional-537000 | jenkins | v1.30.1 | 20 May 23 08:09 PDT |                     |
	|           | -T /mount3                                                                                                         |                   |         |         |                     |                     |
	| ssh       | functional-537000 ssh findmnt                                                                                      | functional-537000 | jenkins | v1.30.1 | 20 May 23 08:09 PDT | 20 May 23 08:09 PDT |
	|           | -T /mount1                                                                                                         |                   |         |         |                     |                     |
	| ssh       | functional-537000 ssh findmnt                                                                                      | functional-537000 | jenkins | v1.30.1 | 20 May 23 08:09 PDT | 20 May 23 08:09 PDT |
	|           | -T /mount2                                                                                                         |                   |         |         |                     |                     |
	| ssh       | functional-537000 ssh findmnt                                                                                      | functional-537000 | jenkins | v1.30.1 | 20 May 23 08:09 PDT |                     |
	|           | -T /mount3                                                                                                         |                   |         |         |                     |                     |
	| ssh       | functional-537000 ssh findmnt                                                                                      | functional-537000 | jenkins | v1.30.1 | 20 May 23 08:09 PDT | 20 May 23 08:09 PDT |
	|           | -T /mount1                                                                                                         |                   |         |         |                     |                     |
	| ssh       | functional-537000 ssh findmnt                                                                                      | functional-537000 | jenkins | v1.30.1 | 20 May 23 08:09 PDT | 20 May 23 08:09 PDT |
	|           | -T /mount2                                                                                                         |                   |         |         |                     |                     |
	| ssh       | functional-537000 ssh findmnt                                                                                      | functional-537000 | jenkins | v1.30.1 | 20 May 23 08:09 PDT |                     |
	|           | -T /mount3                                                                                                         |                   |         |         |                     |                     |
	| ssh       | functional-537000 ssh findmnt                                                                                      | functional-537000 | jenkins | v1.30.1 | 20 May 23 08:09 PDT | 20 May 23 08:09 PDT |
	|           | -T /mount1                                                                                                         |                   |         |         |                     |                     |
	| ssh       | functional-537000 ssh findmnt                                                                                      | functional-537000 | jenkins | v1.30.1 | 20 May 23 08:09 PDT | 20 May 23 08:09 PDT |
	|           | -T /mount2                                                                                                         |                   |         |         |                     |                     |
	| ssh       | functional-537000 ssh findmnt                                                                                      | functional-537000 | jenkins | v1.30.1 | 20 May 23 08:09 PDT |                     |
	|           | -T /mount3                                                                                                         |                   |         |         |                     |                     |
	| start     | -p functional-537000                                                                                               | functional-537000 | jenkins | v1.30.1 | 20 May 23 08:09 PDT |                     |
	|           | --dry-run --memory                                                                                                 |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                            |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                     |                   |         |         |                     |                     |
	| start     | -p functional-537000 --dry-run                                                                                     | functional-537000 | jenkins | v1.30.1 | 20 May 23 08:09 PDT |                     |
	|           | --alsologtostderr -v=1                                                                                             |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                     |                   |         |         |                     |                     |
	| start     | -p functional-537000                                                                                               | functional-537000 | jenkins | v1.30.1 | 20 May 23 08:09 PDT |                     |
	|           | --dry-run --memory                                                                                                 |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                            |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                     |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                                                                 | functional-537000 | jenkins | v1.30.1 | 20 May 23 08:09 PDT |                     |
	|           | -p functional-537000                                                                                               |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                             |                   |         |         |                     |                     |
	|-----------|--------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/05/20 08:09:37
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 08:09:37.565187    2296 out.go:296] Setting OutFile to fd 1 ...
	I0520 08:09:37.565285    2296 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:09:37.565288    2296 out.go:309] Setting ErrFile to fd 2...
	I0520 08:09:37.565290    2296 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:09:37.565388    2296 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16543-1012/.minikube/bin
	I0520 08:09:37.566579    2296 out.go:303] Setting JSON to false
	I0520 08:09:37.582881    2296 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":548,"bootTime":1684594829,"procs":444,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 08:09:37.582968    2296 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0520 08:09:37.587991    2296 out.go:177] * [functional-537000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0520 08:09:37.597986    2296 out.go:177]   - MINIKUBE_LOCATION=16543
	I0520 08:09:37.595135    2296 notify.go:220] Checking for updates...
	I0520 08:09:37.605957    2296 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	I0520 08:09:37.610897    2296 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 08:09:37.617888    2296 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 08:09:37.625941    2296 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	I0520 08:09:37.631905    2296 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 08:09:37.633869    2296 config.go:182] Loaded profile config "functional-537000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0520 08:09:37.634088    2296 driver.go:375] Setting default libvirt URI to qemu:///system
	I0520 08:09:37.637941    2296 out.go:177] * Using the qemu2 driver based on the existing profile
	I0520 08:09:37.643918    2296 start.go:295] selected driver: qemu2
	I0520 08:09:37.643923    2296 start.go:870] validating driver "qemu2" against &{Name:functional-537000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:functional-537000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0520 08:09:37.643972    2296 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 08:09:37.649962    2296 out.go:177] 
	W0520 08:09:37.653933    2296 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: The requested memory allocation of 250MiB is less than the usable minimum of 1800MB
	I0520 08:09:37.657991    2296 out.go:177] 
	
	* 
	* ==> Docker <==
	* -- Journal begins at Sat 2023-05-20 15:06:17 UTC, ends at Sat 2023-05-20 15:09:39 UTC. --
	May 20 15:09:20 functional-537000 dockerd[8509]: time="2023-05-20T15:09:20.546088135Z" level=warning msg="cleaning up after shim disconnected" id=4942b89873a0b6d0158862a4b9d3b7156499999d77b131688dd5e9665730caad namespace=moby
	May 20 15:09:20 functional-537000 dockerd[8509]: time="2023-05-20T15:09:20.546092468Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 15:09:22 functional-537000 dockerd[8509]: time="2023-05-20T15:09:22.527227825Z" level=info msg="shim disconnected" id=0d264daa1da04d46822b1ae6710ea60591bcdafc1bff25b23c1efa9eadb9e771 namespace=moby
	May 20 15:09:22 functional-537000 dockerd[8509]: time="2023-05-20T15:09:22.527263283Z" level=warning msg="cleaning up after shim disconnected" id=0d264daa1da04d46822b1ae6710ea60591bcdafc1bff25b23c1efa9eadb9e771 namespace=moby
	May 20 15:09:22 functional-537000 dockerd[8509]: time="2023-05-20T15:09:22.527268658Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 15:09:22 functional-537000 dockerd[8503]: time="2023-05-20T15:09:22.527536698Z" level=info msg="ignoring event" container=0d264daa1da04d46822b1ae6710ea60591bcdafc1bff25b23c1efa9eadb9e771 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 15:09:26 functional-537000 dockerd[8509]: time="2023-05-20T15:09:26.524581413Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 15:09:26 functional-537000 dockerd[8509]: time="2023-05-20T15:09:26.524645288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 15:09:26 functional-537000 dockerd[8509]: time="2023-05-20T15:09:26.524676746Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 15:09:26 functional-537000 dockerd[8509]: time="2023-05-20T15:09:26.524688163Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 15:09:26 functional-537000 dockerd[8503]: time="2023-05-20T15:09:26.568887791Z" level=info msg="ignoring event" container=c876f19ce53ae4f3abf705471757d8d807671a0b17f1d5520c3d5ce62ad980e9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 15:09:26 functional-537000 dockerd[8509]: time="2023-05-20T15:09:26.569075665Z" level=info msg="shim disconnected" id=c876f19ce53ae4f3abf705471757d8d807671a0b17f1d5520c3d5ce62ad980e9 namespace=moby
	May 20 15:09:26 functional-537000 dockerd[8509]: time="2023-05-20T15:09:26.569106665Z" level=warning msg="cleaning up after shim disconnected" id=c876f19ce53ae4f3abf705471757d8d807671a0b17f1d5520c3d5ce62ad980e9 namespace=moby
	May 20 15:09:26 functional-537000 dockerd[8509]: time="2023-05-20T15:09:26.569111415Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 15:09:38 functional-537000 dockerd[8509]: time="2023-05-20T15:09:38.590895263Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 15:09:38 functional-537000 dockerd[8509]: time="2023-05-20T15:09:38.590925513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 15:09:38 functional-537000 dockerd[8509]: time="2023-05-20T15:09:38.590931971Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 15:09:38 functional-537000 dockerd[8509]: time="2023-05-20T15:09:38.591131929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 15:09:38 functional-537000 dockerd[8509]: time="2023-05-20T15:09:38.591874468Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 15:09:38 functional-537000 dockerd[8509]: time="2023-05-20T15:09:38.592068259Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 15:09:38 functional-537000 dockerd[8509]: time="2023-05-20T15:09:38.594493084Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 15:09:38 functional-537000 dockerd[8509]: time="2023-05-20T15:09:38.594525584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 15:09:38 functional-537000 cri-dockerd[9363]: time="2023-05-20T15:09:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2721b9788d536f2b83af2873bd5432118160202c87fa21c023d465859c710d24/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	May 20 15:09:38 functional-537000 cri-dockerd[9363]: time="2023-05-20T15:09:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/88b6f35ee17ffaa7ea20704480032a3a13ce7dfbc884bb33a6ad9965e1cd9552/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	May 20 15:09:38 functional-537000 dockerd[8503]: time="2023-05-20T15:09:38.998985985Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID
	c876f19ce53ae       72565bf5bbedf                                                                                         13 seconds ago       Exited              echoserver-arm            3                   fbb56259d2bdd
	4942b89873a0b       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   19 seconds ago       Exited              mount-munger              0                   0d264daa1da04
	ac4e796682271       72565bf5bbedf                                                                                         23 seconds ago       Exited              echoserver-arm            2                   91f0becdb243b
	c1a754fb8eda6       nginx@sha256:480868e8c8c797794257e2abd88d0f9a8809b2fe956cbfbc05dcc0bca1f7cd43                         29 seconds ago       Running             myfrontend                0                   fb5f93a07893d
	8802b52f40450       nginx@sha256:02ffd439b71d9ea9408e449b568f65c0bbbb94bebd8750f1d80231ab6496008e                         45 seconds ago       Running             nginx                     0                   8fe8a6c7f9f59
	af9bbb635e741       97e04611ad434                                                                                         About a minute ago   Running             coredns                   2                   deb5d52e909b8
	066b7686f5336       ba04bb24b9575                                                                                         About a minute ago   Running             storage-provisioner       3                   c0c044ba4dc7b
	2821a7d42f0a2       29921a0845422                                                                                         About a minute ago   Running             kube-proxy                3                   2c989b393407a
	0fe2b97a26a21       24bc64e911039                                                                                         About a minute ago   Running             etcd                      3                   8b03549f0519f
	d0c182e9138f6       305d7ed1dae28                                                                                         About a minute ago   Running             kube-scheduler            3                   1210296f6dfe4
	0378d9e565a4b       72c9df6be7f1b                                                                                         About a minute ago   Running             kube-apiserver            0                   3675d56a7ab45
	417f084c3c946       2ee705380c3c5                                                                                         About a minute ago   Running             kube-controller-manager   3                   ac985b3a98731
	f81fed79a1c27       305d7ed1dae28                                                                                         2 minutes ago        Exited              kube-scheduler            2                   1ee2e59ad2f29
	d179f9a255b91       97e04611ad434                                                                                         2 minutes ago        Exited              coredns                   1                   f6c2d92cfca41
	16ac5269fc99f       ba04bb24b9575                                                                                         2 minutes ago        Exited              storage-provisioner       2                   480cf7505e591
	6f35b89ec6bfd       24bc64e911039                                                                                         2 minutes ago        Exited              etcd                      2                   d5d899b75f140
	b6f8932e57731       29921a0845422                                                                                         2 minutes ago        Exited              kube-proxy                2                   aef8dfed84ad6
	59532f881219e       2ee705380c3c5                                                                                         2 minutes ago        Exited              kube-controller-manager   2                   4c3ee01bcb8a7
	
	* 
	* ==> coredns [af9bbb635e74] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:58871 - 35098 "HINFO IN 9095312138016394971.3490947401665456859. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009358553s
	[INFO] 10.244.0.1:3791 - 28610 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000085624s
	[INFO] 10.244.0.1:44113 - 16870 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000090291s
	[INFO] 10.244.0.1:40107 - 24696 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.000933829s
	[INFO] 10.244.0.1:64249 - 37211 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000048084s
	[INFO] 10.244.0.1:57912 - 16422 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000085666s
	[INFO] 10.244.0.1:13504 - 49771 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000098416s
	
	* 
	* ==> coredns [d179f9a255b9] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:58513 - 41252 "HINFO IN 5509182939907841389.8822191893046119781. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009031795s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               functional-537000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-537000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=24686ce6bbd657e092eb3c3fd6be64c1b7241033
	                    minikube.k8s.io/name=functional-537000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_05_20T08_06_35_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 20 May 2023 15:06:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-537000
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 20 May 2023 15:09:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 20 May 2023 15:09:18 +0000   Sat, 20 May 2023 15:06:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 20 May 2023 15:09:18 +0000   Sat, 20 May 2023 15:06:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 20 May 2023 15:09:18 +0000   Sat, 20 May 2023 15:06:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 20 May 2023 15:09:18 +0000   Sat, 20 May 2023 15:06:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-537000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905972Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905972Ki
	  pods:               110
	System Info:
	  Machine ID:                 3528df496e654a3e90b7a05b927c97c9
	  System UUID:                3528df496e654a3e90b7a05b927c97c9
	  Boot ID:                    c3243999-0e28-4f47-b715-7331b91b7f13
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.23
	  Kubelet Version:            v1.27.2
	  Kube-Proxy Version:         v1.27.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-7b684b55f9-szz9k                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  default                     hello-node-connect-58d66798bb-rszsd           0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 coredns-5d78c9869d-mnhb2                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m51s
	  kube-system                 etcd-functional-537000                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         3m4s
	  kube-system                 kube-apiserver-functional-537000              250m (12%)    0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-controller-manager-functional-537000     200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m4s
	  kube-system                 kube-proxy-pwf8l                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m52s
	  kube-system                 kube-scheduler-functional-537000              100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m4s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m51s
	  kubernetes-dashboard        dashboard-metrics-scraper-5dd9cbfd69-nzh6s    0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  kubernetes-dashboard        kubernetes-dashboard-5c5cfc8747-77j67         0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2m50s              kube-proxy       
	  Normal  Starting                 81s                kube-proxy       
	  Normal  Starting                 2m25s              kube-proxy       
	  Normal  Starting                 3m4s               kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  3m4s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m4s               kubelet          Node functional-537000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m4s               kubelet          Node functional-537000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m4s               kubelet          Node functional-537000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                3m2s               kubelet          Node functional-537000 status is now: NodeReady
	  Normal  RegisteredNode           2m52s              node-controller  Node functional-537000 event: Registered Node functional-537000 in Controller
	  Normal  RegisteredNode           2m13s              node-controller  Node functional-537000 event: Registered Node functional-537000 in Controller
	  Normal  Starting                 85s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  85s (x8 over 85s)  kubelet          Node functional-537000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    85s (x8 over 85s)  kubelet          Node functional-537000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     85s (x7 over 85s)  kubelet          Node functional-537000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  85s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           69s                node-controller  Node functional-537000 event: Registered Node functional-537000 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.107505] systemd-fstab-generator[3783]: Ignoring "noauto" for root device
	[  +0.583534] kauditd_printk_skb: 28 callbacks suppressed
	[May20 15:07] systemd-fstab-generator[5349]: Ignoring "noauto" for root device
	[  +0.104265] systemd-fstab-generator[5457]: Ignoring "noauto" for root device
	[  +0.131187] systemd-fstab-generator[5617]: Ignoring "noauto" for root device
	[  +0.153618] systemd-fstab-generator[5740]: Ignoring "noauto" for root device
	[  +0.151299] systemd-fstab-generator[5780]: Ignoring "noauto" for root device
	[ +15.062686] kauditd_printk_skb: 52 callbacks suppressed
	[ +29.589105] systemd-fstab-generator[7746]: Ignoring "noauto" for root device
	[  +0.148925] systemd-fstab-generator[7777]: Ignoring "noauto" for root device
	[  +0.101738] systemd-fstab-generator[7788]: Ignoring "noauto" for root device
	[  +0.088262] systemd-fstab-generator[7801]: Ignoring "noauto" for root device
	[May20 15:08] systemd-fstab-generator[8978]: Ignoring "noauto" for root device
	[  +0.101351] systemd-fstab-generator[9046]: Ignoring "noauto" for root device
	[  +0.113935] systemd-fstab-generator[9187]: Ignoring "noauto" for root device
	[  +0.134594] systemd-fstab-generator[9268]: Ignoring "noauto" for root device
	[  +0.124309] systemd-fstab-generator[9355]: Ignoring "noauto" for root device
	[  +1.087765] systemd-fstab-generator[9737]: Ignoring "noauto" for root device
	[  +3.415301] kauditd_printk_skb: 61 callbacks suppressed
	[ +12.720073] kauditd_printk_skb: 5 callbacks suppressed
	[ +11.099486] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[ +15.325519] kauditd_printk_skb: 1 callbacks suppressed
	[May20 15:09] kauditd_printk_skb: 1 callbacks suppressed
	[  +9.607575] kauditd_printk_skb: 3 callbacks suppressed
	[ +19.841757] kauditd_printk_skb: 3 callbacks suppressed
	
	* 
	* ==> etcd [0fe2b97a26a2] <==
	* {"level":"info","ts":"2023-05-20T15:08:15.402Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-05-20T15:08:15.402Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-05-20T15:08:15.402Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-05-20T15:08:15.402Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-05-20T15:08:15.402Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-05-20T15:08:15.402Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-05-20T15:08:15.402Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-05-20T15:08:15.402Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2023-05-20T15:08:15.402Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2023-05-20T15:08:15.402Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-20T15:08:15.402Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-20T15:08:16.579Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2023-05-20T15:08:16.579Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-05-20T15:08:16.579Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-05-20T15:08:16.579Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2023-05-20T15:08:16.579Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2023-05-20T15:08:16.580Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2023-05-20T15:08:16.580Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2023-05-20T15:08:16.583Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-537000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2023-05-20T15:08:16.583Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-05-20T15:08:16.587Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2023-05-20T15:08:16.587Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-05-20T15:08:16.589Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-05-20T15:08:16.593Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-05-20T15:08:16.593Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> etcd [6f35b89ec6bf] <==
	* {"level":"info","ts":"2023-05-20T15:07:11.973Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-05-20T15:07:11.974Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-05-20T15:07:11.974Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-05-20T15:07:11.974Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-05-20T15:07:11.974Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-05-20T15:07:13.267Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 2"}
	{"level":"info","ts":"2023-05-20T15:07:13.267Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-05-20T15:07:13.267Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2023-05-20T15:07:13.267Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2023-05-20T15:07:13.267Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-05-20T15:07:13.267Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2023-05-20T15:07:13.267Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-05-20T15:07:13.270Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-05-20T15:07:13.270Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-05-20T15:07:13.273Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2023-05-20T15:07:13.273Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-05-20T15:07:13.273Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-05-20T15:07:13.273Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-05-20T15:07:13.270Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-537000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2023-05-20T15:07:56.142Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-05-20T15:07:56.142Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"functional-537000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"info","ts":"2023-05-20T15:07:56.154Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2023-05-20T15:07:56.155Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-05-20T15:07:56.157Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-05-20T15:07:56.157Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"functional-537000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	* 
	* ==> kernel <==
	*  15:09:39 up 3 min,  0 users,  load average: 0.83, 0.55, 0.23
	Linux functional-537000 5.10.57 #1 SMP PREEMPT Mon May 15 19:29:44 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [0378d9e565a4] <==
	* I0520 15:08:17.280532       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0520 15:08:17.296874       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0520 15:08:17.297076       1 cache.go:39] Caches are synced for autoregister controller
	I0520 15:08:17.297142       1 shared_informer.go:318] Caches are synced for configmaps
	I0520 15:08:17.297165       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0520 15:08:17.297439       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0520 15:08:17.297484       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0520 15:08:17.297504       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0520 15:08:17.307144       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0520 15:08:18.066160       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0520 15:08:18.197639       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0520 15:08:18.786564       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0520 15:08:18.789929       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0520 15:08:18.801884       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0520 15:08:18.809911       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0520 15:08:18.812042       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0520 15:08:30.244658       1 controller.go:624] quota admission added evaluator for: endpoints
	I0520 15:08:30.461618       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0520 15:08:39.695451       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0520 15:08:39.749905       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs=map[IPv4:10.106.200.145]
	I0520 15:08:51.843606       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs=map[IPv4:10.103.212.197]
	I0520 15:09:03.317196       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs=map[IPv4:10.110.197.145]
	I0520 15:09:38.125024       1 controller.go:624] quota admission added evaluator for: namespaces
	I0520 15:09:38.187471       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.100.88.188]
	I0520 15:09:38.220498       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.98.147.121]
	
	* 
	* ==> kube-controller-manager [417f084c3c94] <==
	* I0520 15:08:39.696896       1 event.go:307] "Event occurred" object="default/hello-node" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-7b684b55f9 to 1"
	I0520 15:08:39.707065       1 event.go:307] "Event occurred" object="default/hello-node-7b684b55f9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-7b684b55f9-szz9k"
	I0520 15:08:58.432233       1 event.go:307] "Event occurred" object="default/myclaim" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0520 15:09:03.275360       1 event.go:307] "Event occurred" object="default/hello-node-connect" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-connect-58d66798bb to 1"
	I0520 15:09:03.278349       1 event.go:307] "Event occurred" object="default/hello-node-connect-58d66798bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-connect-58d66798bb-rszsd"
	I0520 15:09:38.145188       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5dd9cbfd69 to 1"
	I0520 15:09:38.150497       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5dd9cbfd69" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-5dd9cbfd69-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0520 15:09:38.153299       1 replica_set.go:544] sync "kubernetes-dashboard/dashboard-metrics-scraper-5dd9cbfd69" failed with pods "dashboard-metrics-scraper-5dd9cbfd69-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0520 15:09:38.156737       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-5c5cfc8747 to 1"
	E0520 15:09:38.158643       1 replica_set.go:544] sync "kubernetes-dashboard/dashboard-metrics-scraper-5dd9cbfd69" failed with pods "dashboard-metrics-scraper-5dd9cbfd69-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0520 15:09:38.158725       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5dd9cbfd69" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-5dd9cbfd69-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0520 15:09:38.161499       1 replica_set.go:544] sync "kubernetes-dashboard/dashboard-metrics-scraper-5dd9cbfd69" failed with pods "dashboard-metrics-scraper-5dd9cbfd69-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0520 15:09:38.161831       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5c5cfc8747" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5c5cfc8747-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0520 15:09:38.161864       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5dd9cbfd69" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-5dd9cbfd69-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0520 15:09:38.164413       1 replica_set.go:544] sync "kubernetes-dashboard/kubernetes-dashboard-5c5cfc8747" failed with pods "kubernetes-dashboard-5c5cfc8747-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0520 15:09:38.167313       1 replica_set.go:544] sync "kubernetes-dashboard/kubernetes-dashboard-5c5cfc8747" failed with pods "kubernetes-dashboard-5c5cfc8747-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0520 15:09:38.167420       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5c5cfc8747" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5c5cfc8747-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0520 15:09:38.171307       1 replica_set.go:544] sync "kubernetes-dashboard/dashboard-metrics-scraper-5dd9cbfd69" failed with pods "dashboard-metrics-scraper-5dd9cbfd69-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0520 15:09:38.171317       1 replica_set.go:544] sync "kubernetes-dashboard/kubernetes-dashboard-5c5cfc8747" failed with pods "kubernetes-dashboard-5c5cfc8747-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0520 15:09:38.171331       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5dd9cbfd69" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-5dd9cbfd69-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0520 15:09:38.171451       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5c5cfc8747" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5c5cfc8747-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0520 15:09:38.203383       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5c5cfc8747" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-5c5cfc8747-77j67"
	I0520 15:09:38.233172       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5dd9cbfd69" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5dd9cbfd69-nzh6s"
	W0520 15:09:38.242663       1 endpointslice_controller.go:297] Error syncing endpoint slices for service "kubernetes-dashboard/dashboard-metrics-scraper", retrying. Error: EndpointSlice informer cache is out of date
	I0520 15:09:38.252396       1 event.go:307] "Event occurred" object="dashboard-metrics-scraper" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToCreateEndpoint" message="Failed to create endpoint for service kubernetes-dashboard/dashboard-metrics-scraper: endpoints \"dashboard-metrics-scraper\" already exists"
	
	* 
	* ==> kube-controller-manager [59532f881219] <==
	* I0520 15:07:26.127065       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0520 15:07:26.127105       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0520 15:07:26.127125       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0520 15:07:26.129347       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0520 15:07:26.131177       1 shared_informer.go:318] Caches are synced for deployment
	I0520 15:07:26.131470       1 shared_informer.go:318] Caches are synced for TTL
	I0520 15:07:26.137258       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0520 15:07:26.138356       1 shared_informer.go:318] Caches are synced for daemon sets
	I0520 15:07:26.143926       1 shared_informer.go:318] Caches are synced for PVC protection
	I0520 15:07:26.146958       1 shared_informer.go:318] Caches are synced for ephemeral
	I0520 15:07:26.148008       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0520 15:07:26.160631       1 shared_informer.go:318] Caches are synced for disruption
	I0520 15:07:26.162818       1 shared_informer.go:318] Caches are synced for stateful set
	I0520 15:07:26.167355       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0520 15:07:26.234356       1 shared_informer.go:318] Caches are synced for attach detach
	I0520 15:07:26.254697       1 shared_informer.go:318] Caches are synced for namespace
	I0520 15:07:26.289429       1 shared_informer.go:318] Caches are synced for service account
	I0520 15:07:26.670820       1 shared_informer.go:318] Caches are synced for resource quota
	I0520 15:07:26.671926       1 shared_informer.go:318] Caches are synced for garbage collector
	I0520 15:07:26.675050       1 shared_informer.go:318] Caches are synced for garbage collector
	I0520 15:07:26.675066       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0520 15:07:26.692406       1 shared_informer.go:318] Caches are synced for cronjob
	I0520 15:07:26.701004       1 shared_informer.go:318] Caches are synced for resource quota
	I0520 15:07:26.742276       1 shared_informer.go:318] Caches are synced for job
	I0520 15:07:26.745354       1 shared_informer.go:318] Caches are synced for TTL after finished
	
	* 
	* ==> kube-proxy [2821a7d42f0a] <==
	* I0520 15:08:17.991617       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I0520 15:08:17.991655       1 server_others.go:110] "Detected node IP" address="192.168.105.4"
	I0520 15:08:17.991665       1 server_others.go:551] "Using iptables proxy"
	I0520 15:08:18.029175       1 server_others.go:176] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0520 15:08:18.029224       1 server_others.go:190] "Using iptables Proxier"
	I0520 15:08:18.029637       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0520 15:08:18.030813       1 server.go:657] "Version info" version="v1.27.2"
	I0520 15:08:18.031017       1 server.go:659] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 15:08:18.031375       1 config.go:188] "Starting service config controller"
	I0520 15:08:18.031388       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0520 15:08:18.031399       1 config.go:97] "Starting endpoint slice config controller"
	I0520 15:08:18.031400       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0520 15:08:18.033860       1 config.go:315] "Starting node config controller"
	I0520 15:08:18.033866       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0520 15:08:18.131633       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0520 15:08:18.131633       1 shared_informer.go:318] Caches are synced for service config
	I0520 15:08:18.133891       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-proxy [b6f8932e5773] <==
	* I0520 15:07:13.987986       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I0520 15:07:13.988029       1 server_others.go:110] "Detected node IP" address="192.168.105.4"
	I0520 15:07:13.988042       1 server_others.go:551] "Using iptables proxy"
	I0520 15:07:13.995997       1 server_others.go:176] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0520 15:07:13.996007       1 server_others.go:190] "Using iptables Proxier"
	I0520 15:07:13.996019       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0520 15:07:13.996184       1 server.go:657] "Version info" version="v1.27.2"
	I0520 15:07:13.996189       1 server.go:659] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 15:07:13.996470       1 config.go:188] "Starting service config controller"
	I0520 15:07:13.996482       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0520 15:07:13.996522       1 config.go:97] "Starting endpoint slice config controller"
	I0520 15:07:13.996525       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0520 15:07:13.996724       1 config.go:315] "Starting node config controller"
	I0520 15:07:13.996727       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0520 15:07:14.102511       1 shared_informer.go:318] Caches are synced for node config
	I0520 15:07:14.106485       1 shared_informer.go:318] Caches are synced for service config
	I0520 15:07:14.106720       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [d0c182e9138f] <==
	* I0520 15:08:16.062588       1 serving.go:348] Generated self-signed cert in-memory
	W0520 15:08:17.215098       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0520 15:08:17.215180       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0520 15:08:17.215217       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0520 15:08:17.215236       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0520 15:08:17.239060       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.2"
	I0520 15:08:17.239154       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 15:08:17.240195       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0520 15:08:17.242506       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0520 15:08:17.242540       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0520 15:08:17.242561       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0520 15:08:17.343394       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [f81fed79a1c2] <==
	* I0520 15:07:28.744945       1 serving.go:348] Generated self-signed cert in-memory
	I0520 15:07:29.000549       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.2"
	I0520 15:07:29.000563       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 15:07:29.002157       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0520 15:07:29.002195       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0520 15:07:29.002243       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0520 15:07:29.002256       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0520 15:07:29.002271       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0520 15:07:29.002290       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0520 15:07:29.002581       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0520 15:07:29.002633       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0520 15:07:29.103058       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0520 15:07:29.103102       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I0520 15:07:29.103112       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0520 15:07:56.149160       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0520 15:07:56.149176       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0520 15:07:56.149236       1 scheduling_queue.go:1135] "Error while retrieving next pod from scheduling queue" err="scheduling queue is closed"
	E0520 15:07:56.149257       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Sat 2023-05-20 15:06:17 UTC, ends at Sat 2023-05-20 15:09:39 UTC. --
	May 20 15:09:22 functional-537000 kubelet[9743]: I0520 15:09:22.738684    9743 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/7013467d-4e9d-4eff-aa16-d4b40efaab49-test-volume\") pod \"7013467d-4e9d-4eff-aa16-d4b40efaab49\" (UID: \"7013467d-4e9d-4eff-aa16-d4b40efaab49\") "
	May 20 15:09:22 functional-537000 kubelet[9743]: I0520 15:09:22.738734    9743 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6vbz\" (UniqueName: \"kubernetes.io/projected/7013467d-4e9d-4eff-aa16-d4b40efaab49-kube-api-access-q6vbz\") pod \"7013467d-4e9d-4eff-aa16-d4b40efaab49\" (UID: \"7013467d-4e9d-4eff-aa16-d4b40efaab49\") "
	May 20 15:09:22 functional-537000 kubelet[9743]: I0520 15:09:22.739037    9743 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7013467d-4e9d-4eff-aa16-d4b40efaab49-test-volume" (OuterVolumeSpecName: "test-volume") pod "7013467d-4e9d-4eff-aa16-d4b40efaab49" (UID: "7013467d-4e9d-4eff-aa16-d4b40efaab49"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	May 20 15:09:22 functional-537000 kubelet[9743]: I0520 15:09:22.742541    9743 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7013467d-4e9d-4eff-aa16-d4b40efaab49-kube-api-access-q6vbz" (OuterVolumeSpecName: "kube-api-access-q6vbz") pod "7013467d-4e9d-4eff-aa16-d4b40efaab49" (UID: "7013467d-4e9d-4eff-aa16-d4b40efaab49"). InnerVolumeSpecName "kube-api-access-q6vbz". PluginName "kubernetes.io/projected", VolumeGidValue ""
	May 20 15:09:22 functional-537000 kubelet[9743]: I0520 15:09:22.839782    9743 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-q6vbz\" (UniqueName: \"kubernetes.io/projected/7013467d-4e9d-4eff-aa16-d4b40efaab49-kube-api-access-q6vbz\") on node \"functional-537000\" DevicePath \"\""
	May 20 15:09:22 functional-537000 kubelet[9743]: I0520 15:09:22.839810    9743 reconciler_common.go:300] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/7013467d-4e9d-4eff-aa16-d4b40efaab49-test-volume\") on node \"functional-537000\" DevicePath \"\""
	May 20 15:09:23 functional-537000 kubelet[9743]: I0520 15:09:23.437551    9743 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d264daa1da04d46822b1ae6710ea60591bcdafc1bff25b23c1efa9eadb9e771"
	May 20 15:09:26 functional-537000 kubelet[9743]: I0520 15:09:26.455929    9743 scope.go:115] "RemoveContainer" containerID="e0ea0f358f93ae0c781a2a7b0baaa2883fe89f18f604e354afdd5ec686b6b8f2"
	May 20 15:09:27 functional-537000 kubelet[9743]: I0520 15:09:27.507906    9743 scope.go:115] "RemoveContainer" containerID="e0ea0f358f93ae0c781a2a7b0baaa2883fe89f18f604e354afdd5ec686b6b8f2"
	May 20 15:09:27 functional-537000 kubelet[9743]: I0520 15:09:27.508175    9743 scope.go:115] "RemoveContainer" containerID="c876f19ce53ae4f3abf705471757d8d807671a0b17f1d5520c3d5ce62ad980e9"
	May 20 15:09:27 functional-537000 kubelet[9743]: E0520 15:09:27.508307    9743 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 40s restarting failed container=echoserver-arm pod=hello-node-7b684b55f9-szz9k_default(d7081496-1133-42e4-828f-e5f5f932e285)\"" pod="default/hello-node-7b684b55f9-szz9k" podUID=d7081496-1133-42e4-828f-e5f5f932e285
	May 20 15:09:32 functional-537000 kubelet[9743]: I0520 15:09:32.453896    9743 scope.go:115] "RemoveContainer" containerID="ac4e7966822714724c7ee64fd3ff51261b37fd67c2942ed2835f3c6bdf79503c"
	May 20 15:09:32 functional-537000 kubelet[9743]: E0520 15:09:32.454790    9743 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-58d66798bb-rszsd_default(d5deaded-35e0-4666-ae93-d2089778768d)\"" pod="default/hello-node-connect-58d66798bb-rszsd" podUID=d5deaded-35e0-4666-ae93-d2089778768d
	May 20 15:09:38 functional-537000 kubelet[9743]: I0520 15:09:38.222317    9743 topology_manager.go:212] "Topology Admit Handler"
	May 20 15:09:38 functional-537000 kubelet[9743]: E0520 15:09:38.222364    9743 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7013467d-4e9d-4eff-aa16-d4b40efaab49" containerName="mount-munger"
	May 20 15:09:38 functional-537000 kubelet[9743]: I0520 15:09:38.222381    9743 memory_manager.go:346] "RemoveStaleState removing state" podUID="7013467d-4e9d-4eff-aa16-d4b40efaab49" containerName="mount-munger"
	May 20 15:09:38 functional-537000 kubelet[9743]: I0520 15:09:38.239329    9743 topology_manager.go:212] "Topology Admit Handler"
	May 20 15:09:38 functional-537000 kubelet[9743]: I0520 15:09:38.407215    9743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/99cb3091-f060-4a46-85e7-c7d375c2245d-tmp-volume\") pod \"kubernetes-dashboard-5c5cfc8747-77j67\" (UID: \"99cb3091-f060-4a46-85e7-c7d375c2245d\") " pod="kubernetes-dashboard/kubernetes-dashboard-5c5cfc8747-77j67"
	May 20 15:09:38 functional-537000 kubelet[9743]: I0520 15:09:38.407250    9743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7j2p\" (UniqueName: \"kubernetes.io/projected/5e5057ff-7ee9-407c-989b-bc103b3630f7-kube-api-access-b7j2p\") pod \"dashboard-metrics-scraper-5dd9cbfd69-nzh6s\" (UID: \"5e5057ff-7ee9-407c-989b-bc103b3630f7\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5dd9cbfd69-nzh6s"
	May 20 15:09:38 functional-537000 kubelet[9743]: I0520 15:09:38.407265    9743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-265ht\" (UniqueName: \"kubernetes.io/projected/99cb3091-f060-4a46-85e7-c7d375c2245d-kube-api-access-265ht\") pod \"kubernetes-dashboard-5c5cfc8747-77j67\" (UID: \"99cb3091-f060-4a46-85e7-c7d375c2245d\") " pod="kubernetes-dashboard/kubernetes-dashboard-5c5cfc8747-77j67"
	May 20 15:09:38 functional-537000 kubelet[9743]: I0520 15:09:38.407279    9743 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/5e5057ff-7ee9-407c-989b-bc103b3630f7-tmp-volume\") pod \"dashboard-metrics-scraper-5dd9cbfd69-nzh6s\" (UID: \"5e5057ff-7ee9-407c-989b-bc103b3630f7\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5dd9cbfd69-nzh6s"
	May 20 15:09:38 functional-537000 kubelet[9743]: I0520 15:09:38.452513    9743 scope.go:115] "RemoveContainer" containerID="c876f19ce53ae4f3abf705471757d8d807671a0b17f1d5520c3d5ce62ad980e9"
	May 20 15:09:38 functional-537000 kubelet[9743]: E0520 15:09:38.452678    9743 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 40s restarting failed container=echoserver-arm pod=hello-node-7b684b55f9-szz9k_default(d7081496-1133-42e4-828f-e5f5f932e285)\"" pod="default/hello-node-7b684b55f9-szz9k" podUID=d7081496-1133-42e4-828f-e5f5f932e285
	May 20 15:09:38 functional-537000 kubelet[9743]: I0520 15:09:38.787359    9743 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88b6f35ee17ffaa7ea20704480032a3a13ce7dfbc884bb33a6ad9965e1cd9552"
	May 20 15:09:38 functional-537000 kubelet[9743]: I0520 15:09:38.789319    9743 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2721b9788d536f2b83af2873bd5432118160202c87fa21c023d465859c710d24"
	
	* 
	* ==> storage-provisioner [066b7686f533] <==
	* I0520 15:08:18.033138       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0520 15:08:18.043833       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0520 15:08:18.043901       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0520 15:08:35.443470       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0520 15:08:35.443989       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-537000_b012db52-55ee-43ea-b615-507bcc2d26d0!
	I0520 15:08:35.447730       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dabb9ffd-171d-4e42-aae9-e1379af2f274", APIVersion:"v1", ResourceVersion:"617", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-537000_b012db52-55ee-43ea-b615-507bcc2d26d0 became leader
	I0520 15:08:35.545694       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-537000_b012db52-55ee-43ea-b615-507bcc2d26d0!
	I0520 15:08:58.432232       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0520 15:08:58.432312       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    b079cf39-f185-4032-b825-02a8fa54ef51 345 0 2023-05-20 15:06:48 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2023-05-20 15:06:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-62ed2ac1-c3c9-42cd-9c76-ce8ea219dcb1 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  62ed2ac1-c3c9-42cd-9c76-ce8ea219dcb1 690 0 2023-05-20 15:08:58 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2023-05-20 15:08:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2023-05-20 15:08:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0520 15:08:58.432859       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-62ed2ac1-c3c9-42cd-9c76-ce8ea219dcb1" provisioned
	I0520 15:08:58.432879       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0520 15:08:58.432909       1 volume_store.go:212] Trying to save persistentvolume "pvc-62ed2ac1-c3c9-42cd-9c76-ce8ea219dcb1"
	I0520 15:08:58.433622       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"62ed2ac1-c3c9-42cd-9c76-ce8ea219dcb1", APIVersion:"v1", ResourceVersion:"690", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0520 15:08:58.437334       1 volume_store.go:219] persistentvolume "pvc-62ed2ac1-c3c9-42cd-9c76-ce8ea219dcb1" saved
	I0520 15:08:58.437424       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"62ed2ac1-c3c9-42cd-9c76-ce8ea219dcb1", APIVersion:"v1", ResourceVersion:"690", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-62ed2ac1-c3c9-42cd-9c76-ce8ea219dcb1
	
	* 
	* ==> storage-provisioner [16ac5269fc99] <==
	* I0520 15:07:12.017895       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0520 15:07:13.978877       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0520 15:07:13.978900       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0520 15:07:31.394517       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0520 15:07:31.395523       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-537000_24c74b04-0438-4f60-92c0-05d99136db63!
	I0520 15:07:31.396644       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dabb9ffd-171d-4e42-aae9-e1379af2f274", APIVersion:"v1", ResourceVersion:"504", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-537000_24c74b04-0438-4f60-92c0-05d99136db63 became leader
	I0520 15:07:31.496363       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-537000_24c74b04-0438-4f60-92c0-05d99136db63!
	

-- /stdout --
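A hedged reading of the log dump above (not part of the harness output): the kubelet section shows the echoserver-arm container for both hello-node pods in CrashLoopBackOff, which is the state the service-connect check then runs into. A minimal follow-up sketch to confirm the crash and the pulled image on this arm64 node; commands are illustrative, with names taken from the log:

	# fetch the final output of the crashing container (deployment name from the kubelet log)
	kubectl --context functional-537000 logs deployment/hello-node-connect --previous
	# list each pod with the image its first container actually pulled
	kubectl --context functional-537000 get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.containerStatuses[0].image}{"\n"}{end}'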
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-537000 -n functional-537000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-537000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount dashboard-metrics-scraper-5dd9cbfd69-nzh6s kubernetes-dashboard-5c5cfc8747-77j67
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-537000 describe pod busybox-mount dashboard-metrics-scraper-5dd9cbfd69-nzh6s kubernetes-dashboard-5c5cfc8747-77j67
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-537000 describe pod busybox-mount dashboard-metrics-scraper-5dd9cbfd69-nzh6s kubernetes-dashboard-5c5cfc8747-77j67: exit status 1 (40.272ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-537000/192.168.105.4
	Start Time:       Sat, 20 May 2023 08:09:17 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  docker://4942b89873a0b6d0158862a4b9d3b7156499999d77b131688dd5e9665730caad
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 20 May 2023 08:09:20 -0700
	      Finished:     Sat, 20 May 2023 08:09:20 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q6vbz (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-q6vbz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  21s   default-scheduler  Successfully assigned default/busybox-mount to functional-537000
	  Normal  Pulling    21s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     19s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.07770742s (2.077711004s including waiting)
	  Normal  Created    19s   kubelet            Created container mount-munger
	  Normal  Started    19s   kubelet            Started container mount-munger

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-5dd9cbfd69-nzh6s" not found
	Error from server (NotFound): pods "kubernetes-dashboard-5c5cfc8747-77j67" not found

** /stderr **
helpers_test.go:279: kubectl --context functional-537000 describe pod busybox-mount dashboard-metrics-scraper-5dd9cbfd69-nzh6s kubernetes-dashboard-5c5cfc8747-77j67: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (36.62s)
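For reference, a hedged manual re-run of the connectivity step this test automates, assuming the profile and the hello-node-connect service from the logs above are still deployed (a sketch, not the harness's exact invocation):

	# resolve the reachable URL for the NodePort service, then probe it
	URL=$(out/minikube-darwin-arm64 -p functional-537000 service hello-node-connect --url)
	curl -sf "$URL"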

TestImageBuild/serial/BuildWithBuildArg (1.14s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-027000
image_test.go:105: failed to pass build-args with args: "out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-027000" : 
-- stdout --
	Sending build context to Docker daemon  2.048kB
	Step 1/5 : FROM gcr.io/google-containers/alpine-with-bash:1.0
	 ---> 822c13824dc2
	Step 2/5 : ARG ENV_A
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in d518566dba52
	Removing intermediate container d518566dba52
	 ---> 000d87db330d
	Step 3/5 : ARG ENV_B
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in 5557af3c6136
	Removing intermediate container 5557af3c6136
	 ---> 0c1bec15807d
	Step 4/5 : RUN echo "test-build-arg" $ENV_A $ENV_B
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in 84ffe0fcefdb
	exec /bin/sh: exec format error
	

-- /stdout --
** stderr ** 
	The command '/bin/sh -c echo "test-build-arg" $ENV_A $ENV_B' returned a non-zero code: 1

** /stderr **
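
The platform warnings in the build output point at the root cause: the base image gcr.io/google-containers/alpine-with-bash:1.0 is published only for linux/amd64, so on the arm64 VM its /bin/sh cannot execute and the RUN step dies with "exec format error". Two common workarounds, sketched under the assumption that the daemon supports them (the --platform route additionally needs binfmt/QEMU user-mode emulation, which this buildroot VM may not provide):

	# 1) pin the build platform so the amd64 layers run under emulation
	docker build --platform linux/amd64 -t aaa:latest ./testdata/image-build/test-arg

	# 2) or move the Dockerfile to a multi-arch base image, e.g.
	#    FROM gcr.io/google-containers/alpine-with-bash:1.0  changed to  FROM alpine:3.18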
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-027000 -n image-027000
helpers_test.go:244: <<< TestImageBuild/serial/BuildWithBuildArg FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestImageBuild/serial/BuildWithBuildArg]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p image-027000 logs -n 25
helpers_test.go:252: TestImageBuild/serial/BuildWithBuildArg logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                   Args                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-537000 ssh findmnt            | functional-537000 | jenkins | v1.30.1 | 20 May 23 08:09 PDT |                     |
	|                | -T /mount3                               |                   |         |         |                     |                     |
	| ssh            | functional-537000 ssh findmnt            | functional-537000 | jenkins | v1.30.1 | 20 May 23 08:09 PDT | 20 May 23 08:09 PDT |
	|                | -T /mount1                               |                   |         |         |                     |                     |
	| ssh            | functional-537000 ssh findmnt            | functional-537000 | jenkins | v1.30.1 | 20 May 23 08:09 PDT | 20 May 23 08:09 PDT |
	|                | -T /mount2                               |                   |         |         |                     |                     |
	| ssh            | functional-537000 ssh findmnt            | functional-537000 | jenkins | v1.30.1 | 20 May 23 08:09 PDT |                     |
	|                | -T /mount3                               |                   |         |         |                     |                     |
	| ssh            | functional-537000 ssh findmnt            | functional-537000 | jenkins | v1.30.1 | 20 May 23 08:09 PDT | 20 May 23 08:09 PDT |
	|                | -T /mount1                               |                   |         |         |                     |                     |
	| ssh            | functional-537000 ssh findmnt            | functional-537000 | jenkins | v1.30.1 | 20 May 23 08:09 PDT | 20 May 23 08:09 PDT |
	|                | -T /mount2                               |                   |         |         |                     |                     |
	| ssh            | functional-537000 ssh findmnt            | functional-537000 | jenkins | v1.30.1 | 20 May 23 08:09 PDT |                     |
	|                | -T /mount3                               |                   |         |         |                     |                     |
	| start          | -p functional-537000                     | functional-537000 | jenkins | v1.30.1 | 20 May 23 08:09 PDT |                     |
	|                | --dry-run --memory                       |                   |         |         |                     |                     |
	|                | 250MB --alsologtostderr                  |                   |         |         |                     |                     |
	|                | --driver=qemu2                           |                   |         |         |                     |                     |
	| start          | -p functional-537000 --dry-run           | functional-537000 | jenkins | v1.30.1 | 20 May 23 08:09 PDT |                     |
	|                | --alsologtostderr -v=1                   |                   |         |         |                     |                     |
	|                | --driver=qemu2                           |                   |         |         |                     |                     |
	| start          | -p functional-537000                     | functional-537000 | jenkins | v1.30.1 | 20 May 23 08:09 PDT |                     |
	|                | --dry-run --memory                       |                   |         |         |                     |                     |
	|                | 250MB --alsologtostderr                  |                   |         |         |                     |                     |
	|                | --driver=qemu2                           |                   |         |         |                     |                     |
	| dashboard      | --url --port 36195                       | functional-537000 | jenkins | v1.30.1 | 20 May 23 08:09 PDT | 20 May 23 08:09 PDT |
	|                | -p functional-537000                     |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                   |                   |         |         |                     |                     |
	| update-context | functional-537000                        | functional-537000 | jenkins | v1.30.1 | 20 May 23 08:09 PDT | 20 May 23 08:09 PDT |
	|                | update-context                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                   |         |         |                     |                     |
	| update-context | functional-537000                        | functional-537000 | jenkins | v1.30.1 | 20 May 23 08:09 PDT | 20 May 23 08:09 PDT |
	|                | update-context                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                   |         |         |                     |                     |
	| update-context | functional-537000                        | functional-537000 | jenkins | v1.30.1 | 20 May 23 08:09 PDT | 20 May 23 08:09 PDT |
	|                | update-context                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                   |         |         |                     |                     |
	| image          | functional-537000                        | functional-537000 | jenkins | v1.30.1 | 20 May 23 08:09 PDT | 20 May 23 08:09 PDT |
	|                | image ls --format short                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                        |                   |         |         |                     |                     |
	| image          | functional-537000                        | functional-537000 | jenkins | v1.30.1 | 20 May 23 08:09 PDT | 20 May 23 08:09 PDT |
	|                | image ls --format yaml                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                        |                   |         |         |                     |                     |
	| ssh            | functional-537000 ssh pgrep              | functional-537000 | jenkins | v1.30.1 | 20 May 23 08:09 PDT |                     |
	|                | buildkitd                                |                   |         |         |                     |                     |
	| image          | functional-537000 image build -t         | functional-537000 | jenkins | v1.30.1 | 20 May 23 08:09 PDT | 20 May 23 08:09 PDT |
	|                | localhost/my-image:functional-537000     |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr         |                   |         |         |                     |                     |
	| image          | functional-537000 image ls               | functional-537000 | jenkins | v1.30.1 | 20 May 23 08:09 PDT | 20 May 23 08:09 PDT |
	| image          | functional-537000                        | functional-537000 | jenkins | v1.30.1 | 20 May 23 08:09 PDT | 20 May 23 08:09 PDT |
	|                | image ls --format json                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                        |                   |         |         |                     |                     |
	| image          | functional-537000                        | functional-537000 | jenkins | v1.30.1 | 20 May 23 08:09 PDT | 20 May 23 08:09 PDT |
	|                | image ls --format table                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                        |                   |         |         |                     |                     |
	| delete         | -p functional-537000                     | functional-537000 | jenkins | v1.30.1 | 20 May 23 08:09 PDT | 20 May 23 08:09 PDT |
	| start          | -p image-027000 --driver=qemu2           | image-027000      | jenkins | v1.30.1 | 20 May 23 08:09 PDT | 20 May 23 08:10 PDT |
	|                |                                          |                   |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-027000      | jenkins | v1.30.1 | 20 May 23 08:10 PDT | 20 May 23 08:10 PDT |
	|                | ./testdata/image-build/test-normal       |                   |         |         |                     |                     |
	|                | -p image-027000                          |                   |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-027000      | jenkins | v1.30.1 | 20 May 23 08:10 PDT | 20 May 23 08:10 PDT |
	|                | --build-opt=build-arg=ENV_A=test_env_str |                   |         |         |                     |                     |
	|                | --build-opt=no-cache                     |                   |         |         |                     |                     |
	|                | ./testdata/image-build/test-arg -p       |                   |         |         |                     |                     |
	|                | image-027000                             |                   |         |         |                     |                     |
	|----------------|------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/05/20 08:09:47
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 08:09:47.427879    2351 out.go:296] Setting OutFile to fd 1 ...
	I0520 08:09:47.428031    2351 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:09:47.428033    2351 out.go:309] Setting ErrFile to fd 2...
	I0520 08:09:47.428035    2351 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:09:47.428103    2351 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16543-1012/.minikube/bin
	I0520 08:09:47.429195    2351 out.go:303] Setting JSON to false
	I0520 08:09:47.447280    2351 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":558,"bootTime":1684594829,"procs":442,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 08:09:47.447355    2351 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0520 08:09:47.451115    2351 out.go:177] * [image-027000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0520 08:09:47.455190    2351 out.go:177]   - MINIKUBE_LOCATION=16543
	I0520 08:09:47.455216    2351 notify.go:220] Checking for updates...
	I0520 08:09:47.462144    2351 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	I0520 08:09:47.465147    2351 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 08:09:47.468947    2351 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 08:09:47.471144    2351 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	I0520 08:09:47.474176    2351 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 08:09:47.477245    2351 driver.go:375] Setting default libvirt URI to qemu:///system
	I0520 08:09:47.481082    2351 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 08:09:47.488216    2351 start.go:295] selected driver: qemu2
	I0520 08:09:47.488219    2351 start.go:870] validating driver "qemu2" against <nil>
	I0520 08:09:47.488225    2351 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 08:09:47.488285    2351 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0520 08:09:47.492112    2351 out.go:177] * Automatically selected the socket_vmnet network
	I0520 08:09:47.498499    2351 start_flags.go:382] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0520 08:09:47.498584    2351 start_flags.go:897] Wait components to verify : map[apiserver:true system_pods:true]
	I0520 08:09:47.498595    2351 cni.go:84] Creating CNI manager for ""
	I0520 08:09:47.498602    2351 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 08:09:47.498606    2351 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 08:09:47.498612    2351 start_flags.go:319] config:
	{Name:image-027000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:image-027000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0520 08:09:47.498697    2351 iso.go:125] acquiring lock: {Name:mkf05a14b47e9b67445252fbb4917f3bcb37f054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 08:09:47.506176    2351 out.go:177] * Starting control plane node image-027000 in cluster image-027000
	I0520 08:09:47.510191    2351 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0520 08:09:47.510216    2351 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0520 08:09:47.510228    2351 cache.go:57] Caching tarball of preloaded images
	I0520 08:09:47.510310    2351 preload.go:174] Found /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 08:09:47.510318    2351 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0520 08:09:47.510537    2351 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/image-027000/config.json ...
	I0520 08:09:47.510548    2351 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/image-027000/config.json: {Name:mkbb77569158ecf7ce79fae66462aa5d86033633 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 08:09:47.510763    2351 cache.go:195] Successfully downloaded all kic artifacts
	I0520 08:09:47.510772    2351 start.go:364] acquiring machines lock for image-027000: {Name:mk270c257f49033ffda3b70fea9b877c78314aa8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 08:09:47.510802    2351 start.go:368] acquired machines lock for "image-027000" in 25.583µs
	I0520 08:09:47.510814    2351 start.go:93] Provisioning new machine with config: &{Name:image-027000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:image-027000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 08:09:47.510836    2351 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 08:09:47.518018    2351 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0520 08:09:47.541459    2351 start.go:159] libmachine.API.Create for "image-027000" (driver="qemu2")
	I0520 08:09:47.541495    2351 client.go:168] LocalClient.Create starting
	I0520 08:09:47.541550    2351 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca.pem
	I0520 08:09:47.541698    2351 main.go:141] libmachine: Decoding PEM data...
	I0520 08:09:47.541706    2351 main.go:141] libmachine: Parsing certificate...
	I0520 08:09:47.541754    2351 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/cert.pem
	I0520 08:09:47.541836    2351 main.go:141] libmachine: Decoding PEM data...
	I0520 08:09:47.541841    2351 main.go:141] libmachine: Parsing certificate...
	I0520 08:09:47.542116    2351 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16543-1012/.minikube/cache/iso/arm64/minikube-v1.30.1-1684174510-16506-arm64.iso...
	I0520 08:09:48.112982    2351 main.go:141] libmachine: Creating SSH key...
	I0520 08:09:48.248600    2351 main.go:141] libmachine: Creating Disk image...
	I0520 08:09:48.248604    2351 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 08:09:48.248787    2351 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/image-027000/disk.qcow2.raw /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/image-027000/disk.qcow2
	I0520 08:09:48.264850    2351 main.go:141] libmachine: STDOUT: 
	I0520 08:09:48.264862    2351 main.go:141] libmachine: STDERR: 
	I0520 08:09:48.264910    2351 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/image-027000/disk.qcow2 +20000M
	I0520 08:09:48.271950    2351 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 08:09:48.271958    2351 main.go:141] libmachine: STDERR: 
	I0520 08:09:48.271975    2351 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/image-027000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/image-027000/disk.qcow2
	I0520 08:09:48.271978    2351 main.go:141] libmachine: Starting QEMU VM...
	I0520 08:09:48.272015    2351 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/image-027000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/image-027000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/image-027000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:9f:53:ae:c6:fe -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/image-027000/disk.qcow2
	I0520 08:09:48.306594    2351 main.go:141] libmachine: STDOUT: 
	I0520 08:09:48.306611    2351 main.go:141] libmachine: STDERR: 
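	# Note on the launch line above, a hedged reading of standard QEMU flags: socket_vmnet_client
	# opens the shared vmnet socket and hands it to QEMU as fd 3, which -netdev socket,id=net0,fd=3
	# turns into a bridged NIC without root privileges; -M virt -cpu host -accel hvf selects an
	# HVF-accelerated arm64 machine; -boot d -cdrom ... boots the boot2docker ISO; and -daemonize
	# backgrounds QEMU once the qcow2 disk is attached.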
	I0520 08:09:48.306614    2351 main.go:141] libmachine: Attempt 0
	I0520 08:09:48.306626    2351 main.go:141] libmachine: Searching for ea:9f:53:ae:c6:fe in /var/db/dhcpd_leases ...
	I0520 08:09:48.306821    2351 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0520 08:09:48.306839    2351 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:6:5b:64:76:26:39 ID:1,6:5b:64:76:26:39 Lease:0x646a3369}
	I0520 08:09:48.306855    2351 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:8a:69:5a:cb:74:51 ID:1,8a:69:5a:cb:74:51 Lease:0x6468e1dc}
	I0520 08:09:48.306860    2351 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:36:b1:60:c3:d2:35 ID:1,36:b1:60:c3:d2:35 Lease:0x646a3311}
	I0520 08:09:50.309046    2351 main.go:141] libmachine: Attempt 1
	I0520 08:09:50.309094    2351 main.go:141] libmachine: Searching for ea:9f:53:ae:c6:fe in /var/db/dhcpd_leases ...
	I0520 08:09:50.309491    2351 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0520 08:09:50.309536    2351 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:6:5b:64:76:26:39 ID:1,6:5b:64:76:26:39 Lease:0x646a3369}
	I0520 08:09:50.309563    2351 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:8a:69:5a:cb:74:51 ID:1,8a:69:5a:cb:74:51 Lease:0x6468e1dc}
	I0520 08:09:50.309605    2351 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:36:b1:60:c3:d2:35 ID:1,36:b1:60:c3:d2:35 Lease:0x646a3311}
	I0520 08:09:52.311744    2351 main.go:141] libmachine: Attempt 2
	I0520 08:09:52.311758    2351 main.go:141] libmachine: Searching for ea:9f:53:ae:c6:fe in /var/db/dhcpd_leases ...
	I0520 08:09:52.311856    2351 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0520 08:09:52.311866    2351 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:6:5b:64:76:26:39 ID:1,6:5b:64:76:26:39 Lease:0x646a3369}
	I0520 08:09:52.311871    2351 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:8a:69:5a:cb:74:51 ID:1,8a:69:5a:cb:74:51 Lease:0x6468e1dc}
	I0520 08:09:52.311875    2351 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:36:b1:60:c3:d2:35 ID:1,36:b1:60:c3:d2:35 Lease:0x646a3311}
	I0520 08:09:54.313897    2351 main.go:141] libmachine: Attempt 3
	I0520 08:09:54.313902    2351 main.go:141] libmachine: Searching for ea:9f:53:ae:c6:fe in /var/db/dhcpd_leases ...
	I0520 08:09:54.313948    2351 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0520 08:09:54.313961    2351 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:6:5b:64:76:26:39 ID:1,6:5b:64:76:26:39 Lease:0x646a3369}
	I0520 08:09:54.313966    2351 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:8a:69:5a:cb:74:51 ID:1,8a:69:5a:cb:74:51 Lease:0x6468e1dc}
	I0520 08:09:54.313970    2351 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:36:b1:60:c3:d2:35 ID:1,36:b1:60:c3:d2:35 Lease:0x646a3311}
	I0520 08:09:56.315983    2351 main.go:141] libmachine: Attempt 4
	I0520 08:09:56.315988    2351 main.go:141] libmachine: Searching for ea:9f:53:ae:c6:fe in /var/db/dhcpd_leases ...
	I0520 08:09:56.316023    2351 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0520 08:09:56.316029    2351 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:6:5b:64:76:26:39 ID:1,6:5b:64:76:26:39 Lease:0x646a3369}
	I0520 08:09:56.316033    2351 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:8a:69:5a:cb:74:51 ID:1,8a:69:5a:cb:74:51 Lease:0x6468e1dc}
	I0520 08:09:56.316037    2351 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:36:b1:60:c3:d2:35 ID:1,36:b1:60:c3:d2:35 Lease:0x646a3311}
	I0520 08:09:58.318070    2351 main.go:141] libmachine: Attempt 5
	I0520 08:09:58.318078    2351 main.go:141] libmachine: Searching for ea:9f:53:ae:c6:fe in /var/db/dhcpd_leases ...
	I0520 08:09:58.318150    2351 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0520 08:09:58.318158    2351 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:6:5b:64:76:26:39 ID:1,6:5b:64:76:26:39 Lease:0x646a3369}
	I0520 08:09:58.318162    2351 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:8a:69:5a:cb:74:51 ID:1,8a:69:5a:cb:74:51 Lease:0x6468e1dc}
	I0520 08:09:58.318166    2351 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:36:b1:60:c3:d2:35 ID:1,36:b1:60:c3:d2:35 Lease:0x646a3311}
	I0520 08:10:00.320250    2351 main.go:141] libmachine: Attempt 6
	I0520 08:10:00.320270    2351 main.go:141] libmachine: Searching for ea:9f:53:ae:c6:fe in /var/db/dhcpd_leases ...
	I0520 08:10:00.320393    2351 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0520 08:10:00.320429    2351 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:ea:9f:53:ae:c6:fe ID:1,ea:9f:53:ae:c6:fe Lease:0x646a3447}
	I0520 08:10:00.320435    2351 main.go:141] libmachine: Found match: ea:9f:53:ae:c6:fe
	I0520 08:10:00.320448    2351 main.go:141] libmachine: IP: 192.168.105.5
	I0520 08:10:00.320456    2351 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.5)...
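	# Note, a hedged aside: the retry loop above is how minikube discovers the VM's IP on macOS;
	# it polls the host DHCP lease database until the VM's MAC address appears. An equivalent
	# manual check (MAC taken from the log, path is the macOS default):
	#   grep -C 3 'ea:9f:53:ae:c6:fe' /var/db/dhcpd_leases
	# The ip_address field of the matching entry is the VM's address (192.168.105.5 here).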
	I0520 08:10:02.341516    2351 machine.go:88] provisioning docker machine ...
	I0520 08:10:02.341582    2351 buildroot.go:166] provisioning hostname "image-027000"
	I0520 08:10:02.341920    2351 main.go:141] libmachine: Using SSH client type: native
	I0520 08:10:02.342981    2351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1048306d0] 0x104833130 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0520 08:10:02.342999    2351 main.go:141] libmachine: About to run SSH command:
	sudo hostname image-027000 && echo "image-027000" | sudo tee /etc/hostname
	I0520 08:10:02.428132    2351 main.go:141] libmachine: SSH cmd err, output: <nil>: image-027000
	
	I0520 08:10:02.432226    2351 main.go:141] libmachine: Using SSH client type: native
	I0520 08:10:02.432645    2351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1048306d0] 0x104833130 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0520 08:10:02.432656    2351 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\simage-027000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 image-027000/g' /etc/hosts;
				else 
					echo '127.0.1.1 image-027000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 08:10:02.497441    2351 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 08:10:02.497452    2351 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/16543-1012/.minikube CaCertPath:/Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16543-1012/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16543-1012/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16543-1012/.minikube}
	I0520 08:10:02.497461    2351 buildroot.go:174] setting up certificates
	I0520 08:10:02.497468    2351 provision.go:83] configureAuth start
	I0520 08:10:02.497472    2351 provision.go:138] copyHostCerts
	I0520 08:10:02.497559    2351 exec_runner.go:144] found /Users/jenkins/minikube-integration/16543-1012/.minikube/ca.pem, removing ...
	I0520 08:10:02.497564    2351 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16543-1012/.minikube/ca.pem
	I0520 08:10:02.497693    2351 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16543-1012/.minikube/ca.pem (1082 bytes)
	I0520 08:10:02.498213    2351 exec_runner.go:144] found /Users/jenkins/minikube-integration/16543-1012/.minikube/cert.pem, removing ...
	I0520 08:10:02.498216    2351 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16543-1012/.minikube/cert.pem
	I0520 08:10:02.499034    2351 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16543-1012/.minikube/cert.pem (1123 bytes)
	I0520 08:10:02.499166    2351 exec_runner.go:144] found /Users/jenkins/minikube-integration/16543-1012/.minikube/key.pem, removing ...
	I0520 08:10:02.499168    2351 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16543-1012/.minikube/key.pem
	I0520 08:10:02.500449    2351 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16543-1012/.minikube/key.pem (1679 bytes)
	I0520 08:10:02.500723    2351 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca-key.pem org=jenkins.image-027000 san=[192.168.105.5 192.168.105.5 localhost 127.0.0.1 minikube image-027000]
	I0520 08:10:02.598426    2351 provision.go:172] copyRemoteCerts
	I0520 08:10:02.598470    2351 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 08:10:02.598476    2351 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/image-027000/id_rsa Username:docker}
	I0520 08:10:02.628865    2351 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0520 08:10:02.635789    2351 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0520 08:10:02.642452    2351 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 08:10:02.649746    2351 provision.go:86] duration metric: configureAuth took 152.270417ms
	I0520 08:10:02.649750    2351 buildroot.go:189] setting minikube options for container-runtime
	I0520 08:10:02.649843    2351 config.go:182] Loaded profile config "image-027000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0520 08:10:02.649872    2351 main.go:141] libmachine: Using SSH client type: native
	I0520 08:10:02.650084    2351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1048306d0] 0x104833130 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0520 08:10:02.650087    2351 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0520 08:10:02.703713    2351 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0520 08:10:02.703719    2351 buildroot.go:70] root file system type: tmpfs
	I0520 08:10:02.703771    2351 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0520 08:10:02.703816    2351 main.go:141] libmachine: Using SSH client type: native
	I0520 08:10:02.704041    2351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1048306d0] 0x104833130 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0520 08:10:02.704074    2351 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0520 08:10:02.764021    2351 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0520 08:10:02.764064    2351 main.go:141] libmachine: Using SSH client type: native
	I0520 08:10:02.764308    2351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1048306d0] 0x104833130 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0520 08:10:02.764315    2351 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0520 08:10:03.054115    2351 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
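	# Note, a hedged aside: the diff-or-install command above is an idempotence idiom. The candidate
	# unit is written to docker.service.new; only when it differs from the installed unit (or, as
	# here, no unit exists yet, so diff cannot stat it and exits non-zero) is it moved into place
	# and followed by daemon-reload / enable / restart. The generic shape, with hypothetical names:
	#   sudo diff -u installed.conf candidate.conf || {
	#     sudo mv candidate.conf installed.conf
	#     sudo systemctl daemon-reload && sudo systemctl enable svc && sudo systemctl restart svc
	#   }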
	I0520 08:10:03.054123    2351 machine.go:91] provisioned docker machine in 712.591917ms
	I0520 08:10:03.054126    2351 client.go:171] LocalClient.Create took 15.51265575s
	I0520 08:10:03.054140    2351 start.go:167] duration metric: libmachine.API.Create for "image-027000" took 15.512711417s
	I0520 08:10:03.054143    2351 start.go:300] post-start starting for "image-027000" (driver="qemu2")
	I0520 08:10:03.054145    2351 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 08:10:03.054209    2351 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 08:10:03.054216    2351 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/image-027000/id_rsa Username:docker}
	I0520 08:10:03.085248    2351 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 08:10:03.086767    2351 info.go:137] Remote host: Buildroot 2021.02.12
	I0520 08:10:03.086773    2351 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16543-1012/.minikube/addons for local assets ...
	I0520 08:10:03.086834    2351 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16543-1012/.minikube/files for local assets ...
	I0520 08:10:03.086942    2351 filesync.go:149] local asset: /Users/jenkins/minikube-integration/16543-1012/.minikube/files/etc/ssl/certs/14372.pem -> 14372.pem in /etc/ssl/certs
	I0520 08:10:03.087049    2351 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 08:10:03.089808    2351 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16543-1012/.minikube/files/etc/ssl/certs/14372.pem --> /etc/ssl/certs/14372.pem (1708 bytes)
	I0520 08:10:03.096711    2351 start.go:303] post-start completed in 42.563833ms
	I0520 08:10:03.097064    2351 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/image-027000/config.json ...
	I0520 08:10:03.097217    2351 start.go:128] duration metric: createHost completed in 15.586404792s
	I0520 08:10:03.097248    2351 main.go:141] libmachine: Using SSH client type: native
	I0520 08:10:03.097468    2351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1048306d0] 0x104833130 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0520 08:10:03.097471    2351 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 08:10:03.152676    2351 main.go:141] libmachine: SSH cmd err, output: <nil>: 1684595403.443921585
	
	I0520 08:10:03.152681    2351 fix.go:207] guest clock: 1684595403.443921585
	I0520 08:10:03.152684    2351 fix.go:220] Guest: 2023-05-20 08:10:03.443921585 -0700 PDT Remote: 2023-05-20 08:10:03.09722 -0700 PDT m=+15.690488584 (delta=346.701585ms)
	I0520 08:10:03.152694    2351 fix.go:191] guest clock delta is within tolerance: 346.701585ms
	I0520 08:10:03.152696    2351 start.go:83] releasing machines lock for "image-027000", held for 15.641918417s
	I0520 08:10:03.153018    2351 ssh_runner.go:195] Run: cat /version.json
	I0520 08:10:03.153024    2351 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/image-027000/id_rsa Username:docker}
	I0520 08:10:03.153042    2351 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 08:10:03.153060    2351 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/image-027000/id_rsa Username:docker}
	I0520 08:10:03.228832    2351 ssh_runner.go:195] Run: systemctl --version
	I0520 08:10:03.231016    2351 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 08:10:03.232969    2351 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 08:10:03.233002    2351 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 08:10:03.238285    2351 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 08:10:03.238292    2351 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0520 08:10:03.238384    2351 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0520 08:10:03.249607    2351 docker.go:633] Got preloaded images: 
	I0520 08:10:03.249612    2351 docker.go:639] registry.k8s.io/kube-apiserver:v1.27.2 wasn't preloaded
	I0520 08:10:03.249657    2351 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0520 08:10:03.258398    2351 ssh_runner.go:195] Run: which lz4
	I0520 08:10:03.260944    2351 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0520 08:10:03.262906    2351 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 08:10:03.262923    2351 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (343635669 bytes)
	I0520 08:10:04.539698    2351 docker.go:597] Took 1.278827 seconds to copy over tarball
	I0520 08:10:04.539743    2351 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0520 08:10:05.542020    2351 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.002267542s)
	I0520 08:10:05.542028    2351 ssh_runner.go:146] rm: /preloaded.tar.lz4
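	# Note, a hedged aside: the three steps above are minikube's image preload. All base-image
	# layers ship as a single lz4-compressed tarball that is unpacked directly into /var, so the
	# docker restart that follows comes up with a populated image store instead of pulling each
	# image over the network. Done by hand (modulo sudo for writing to /), the steps would be:
	#   scp preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 docker@192.168.105.5:/preloaded.tar.lz4
	#   ssh docker@192.168.105.5 'sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm /preloaded.tar.lz4'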
	I0520 08:10:05.558485    2351 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0520 08:10:05.561763    2351 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0520 08:10:05.566986    2351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 08:10:05.638386    2351 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 08:10:06.931907    2351 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.293511666s)
	I0520 08:10:06.931925    2351 start.go:481] detecting cgroup driver to use...
	I0520 08:10:06.931989    2351 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 08:10:06.939225    2351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0520 08:10:06.942401    2351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0520 08:10:06.945479    2351 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0520 08:10:06.945505    2351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0520 08:10:06.948945    2351 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 08:10:06.952183    2351 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0520 08:10:06.955166    2351 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 08:10:06.958243    2351 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 08:10:06.961910    2351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0520 08:10:06.965686    2351 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 08:10:06.968950    2351 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 08:10:06.971834    2351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 08:10:07.034010    2351 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0520 08:10:07.044293    2351 start.go:481] detecting cgroup driver to use...
	I0520 08:10:07.044349    2351 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0520 08:10:07.050021    2351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 08:10:07.054886    2351 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 08:10:07.064055    2351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 08:10:07.068592    2351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 08:10:07.073089    2351 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0520 08:10:07.130723    2351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 08:10:07.136727    2351 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 08:10:07.142684    2351 ssh_runner.go:195] Run: which cri-dockerd
	I0520 08:10:07.144046    2351 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0520 08:10:07.146797    2351 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0520 08:10:07.152140    2351 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0520 08:10:07.215408    2351 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0520 08:10:07.294620    2351 docker.go:532] configuring docker to use "cgroupfs" as cgroup driver...
	I0520 08:10:07.294629    2351 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0520 08:10:07.300074    2351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 08:10:07.375384    2351 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 08:10:08.508897    2351 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.133503583s)
	I0520 08:10:08.508978    2351 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0520 08:10:08.574771    2351 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0520 08:10:08.654072    2351 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0520 08:10:08.735427    2351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 08:10:08.804114    2351 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0520 08:10:08.811066    2351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 08:10:08.877978    2351 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0520 08:10:08.901327    2351 start.go:528] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0520 08:10:08.901402    2351 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0520 08:10:08.903450    2351 start.go:549] Will wait 60s for crictl version
	I0520 08:10:08.903497    2351 ssh_runner.go:195] Run: which crictl
	I0520 08:10:08.904920    2351 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 08:10:08.920574    2351 start.go:565] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0520 08:10:08.920643    2351 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0520 08:10:08.929710    2351 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0520 08:10:08.950903    2351 out.go:204] * Preparing Kubernetes v1.27.2 on Docker 20.10.23 ...
	I0520 08:10:08.950990    2351 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0520 08:10:08.952398    2351 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 08:10:08.956176    2351 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0520 08:10:08.956221    2351 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0520 08:10:08.964989    2351 docker.go:633] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0520 08:10:08.964994    2351 docker.go:563] Images already preloaded, skipping extraction
	I0520 08:10:08.965049    2351 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0520 08:10:08.978849    2351 docker.go:633] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0520 08:10:08.978856    2351 cache_images.go:84] Images are preloaded, skipping loading
	I0520 08:10:08.978905    2351 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0520 08:10:08.988605    2351 cni.go:84] Creating CNI manager for ""
	I0520 08:10:08.988611    2351 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 08:10:08.988621    2351 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0520 08:10:08.988633    2351 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.5 APIServerPort:8443 KubernetesVersion:v1.27.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:image-027000 NodeName:image-027000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 08:10:08.988700    2351 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "image-027000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
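	Note: the documents above form the complete kubeadm config that minikube writes to /var/tmp/minikube/kubeadm.yaml.new and feeds to kubeadm init below. Outside minikube, a config like this can be validated without mutating the node; a sketch, assuming kubeadm v1.27.x on PATH:
	    # --dry-run parses and validates the configuration documents and
	    # prints the actions it would take, without writing manifests or
	    # starting services.
	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run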
	
	I0520 08:10:08.988732    2351 kubeadm.go:971] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=image-027000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.2 ClusterName:image-027000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
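	Note: the [Unit]/[Service]/[Install] fragment above is the kubelet drop-in that minikube writes to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below. The empty ExecStart= line is the standard systemd idiom for replacing, rather than appending to, an inherited ExecStart in a drop-in: the first assignment clears the value, the second sets the override. The same mechanism with a hypothetical unit:
	    # /etc/systemd/system/myservice.service.d/10-override.conf (hypothetical)
	    [Service]
	    ExecStart=
	    ExecStart=/usr/local/bin/myservice --flag
	    # apply with: sudo systemctl daemon-reload && sudo systemctl restart myservice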
	I0520 08:10:08.988778    2351 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.2
	I0520 08:10:08.991714    2351 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 08:10:08.991739    2351 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 08:10:08.994434    2351 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I0520 08:10:08.999640    2351 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 08:10:09.004510    2351 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2098 bytes)
	I0520 08:10:09.009809    2351 ssh_runner.go:195] Run: grep 192.168.105.5	control-plane.minikube.internal$ /etc/hosts
	I0520 08:10:09.011357    2351 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.5	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 08:10:09.014867    2351 certs.go:56] Setting up /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/image-027000 for IP: 192.168.105.5
	I0520 08:10:09.014875    2351 certs.go:190] acquiring lock for shared ca certs: {Name:mk455286e32296d088c043e2094c607a8fa5e5c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 08:10:09.015010    2351 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/16543-1012/.minikube/ca.key
	I0520 08:10:09.015195    2351 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/16543-1012/.minikube/proxy-client-ca.key
	I0520 08:10:09.015221    2351 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/image-027000/client.key
	I0520 08:10:09.015228    2351 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/image-027000/client.crt with IP's: []
	I0520 08:10:09.202198    2351 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/image-027000/client.crt ...
	I0520 08:10:09.202202    2351 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/image-027000/client.crt: {Name:mk07e8d43aa3b70bdb0ff22a093771d29fe7e80c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 08:10:09.202423    2351 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/image-027000/client.key ...
	I0520 08:10:09.202425    2351 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/image-027000/client.key: {Name:mk70470e49ab8bf5525e9aa8454703fd97021188 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 08:10:09.202541    2351 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/image-027000/apiserver.key.e69b33ca
	I0520 08:10:09.202547    2351 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/image-027000/apiserver.crt.e69b33ca with IP's: [192.168.105.5 10.96.0.1 127.0.0.1 10.0.0.1]
	I0520 08:10:09.278771    2351 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/image-027000/apiserver.crt.e69b33ca ...
	I0520 08:10:09.278773    2351 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/image-027000/apiserver.crt.e69b33ca: {Name:mk45114c2497961f694eb65b58641d57bdc90ee5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 08:10:09.278892    2351 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/image-027000/apiserver.key.e69b33ca ...
	I0520 08:10:09.278894    2351 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/image-027000/apiserver.key.e69b33ca: {Name:mk8f5a3949399dc7f79cadced6e4a5e089eace0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 08:10:09.278993    2351 certs.go:337] copying /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/image-027000/apiserver.crt.e69b33ca -> /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/image-027000/apiserver.crt
	I0520 08:10:09.279085    2351 certs.go:341] copying /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/image-027000/apiserver.key.e69b33ca -> /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/image-027000/apiserver.key
	I0520 08:10:09.279165    2351 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/image-027000/proxy-client.key
	I0520 08:10:09.279171    2351 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/image-027000/proxy-client.crt with IP's: []
	I0520 08:10:09.364962    2351 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/image-027000/proxy-client.crt ...
	I0520 08:10:09.364968    2351 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/image-027000/proxy-client.crt: {Name:mkbcef93718edff63fed0b1265f683a9957ff5a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 08:10:09.365155    2351 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/image-027000/proxy-client.key ...
	I0520 08:10:09.365158    2351 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/image-027000/proxy-client.key: {Name:mkc93b0817c2c9cdcf072f988a2140166f3e3e76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 08:10:09.365395    2351 certs.go:437] found cert: /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/Users/jenkins/minikube-integration/16543-1012/.minikube/certs/1437.pem (1338 bytes)
	W0520 08:10:09.365577    2351 certs.go:433] ignoring /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/Users/jenkins/minikube-integration/16543-1012/.minikube/certs/1437_empty.pem, impossibly tiny 0 bytes
	I0520 08:10:09.365589    2351 certs.go:437] found cert: /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 08:10:09.365618    2351 certs.go:437] found cert: /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca.pem (1082 bytes)
	I0520 08:10:09.365640    2351 certs.go:437] found cert: /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/Users/jenkins/minikube-integration/16543-1012/.minikube/certs/cert.pem (1123 bytes)
	I0520 08:10:09.365661    2351 certs.go:437] found cert: /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/Users/jenkins/minikube-integration/16543-1012/.minikube/certs/key.pem (1679 bytes)
	I0520 08:10:09.365717    2351 certs.go:437] found cert: /Users/jenkins/minikube-integration/16543-1012/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/16543-1012/.minikube/files/etc/ssl/certs/14372.pem (1708 bytes)
	I0520 08:10:09.365988    2351 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/image-027000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0520 08:10:09.373956    2351 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/image-027000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0520 08:10:09.381723    2351 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/image-027000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 08:10:09.389397    2351 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/image-027000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 08:10:09.396745    2351 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16543-1012/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 08:10:09.403838    2351 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16543-1012/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0520 08:10:09.410863    2351 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16543-1012/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 08:10:09.418218    2351 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16543-1012/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0520 08:10:09.425717    2351 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16543-1012/.minikube/files/etc/ssl/certs/14372.pem --> /usr/share/ca-certificates/14372.pem (1708 bytes)
	I0520 08:10:09.433122    2351 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16543-1012/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 08:10:09.440110    2351 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/1437.pem --> /usr/share/ca-certificates/1437.pem (1338 bytes)
	I0520 08:10:09.446933    2351 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 08:10:09.452196    2351 ssh_runner.go:195] Run: openssl version
	I0520 08:10:09.454288    2351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14372.pem && ln -fs /usr/share/ca-certificates/14372.pem /etc/ssl/certs/14372.pem"
	I0520 08:10:09.457668    2351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14372.pem
	I0520 08:10:09.459303    2351 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 May 20 15:06 /usr/share/ca-certificates/14372.pem
	I0520 08:10:09.459322    2351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14372.pem
	I0520 08:10:09.461255    2351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14372.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 08:10:09.464489    2351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 08:10:09.467433    2351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 08:10:09.468947    2351 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 May 20 15:04 /usr/share/ca-certificates/minikubeCA.pem
	I0520 08:10:09.468966    2351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 08:10:09.470877    2351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 08:10:09.474326    2351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1437.pem && ln -fs /usr/share/ca-certificates/1437.pem /etc/ssl/certs/1437.pem"
	I0520 08:10:09.477825    2351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1437.pem
	I0520 08:10:09.479372    2351 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 May 20 15:06 /usr/share/ca-certificates/1437.pem
	I0520 08:10:09.479393    2351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1437.pem
	I0520 08:10:09.481243    2351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1437.pem /etc/ssl/certs/51391683.0"
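	Note: the three openssl x509 -hash runs above compute the subject-name hashes (3ec20f2e, b5213941, 51391683) used to name the <hash>.0 symlinks in /etc/ssl/certs; OpenSSL's CA lookup resolves certificates by exactly these names. The same trust-store update for an arbitrary PEM, with a hypothetical path:
	    CERT=/usr/share/ca-certificates/example.pem   # hypothetical certificate
	    HASH=$(openssl x509 -hash -noout -in "$CERT")
	    # c_rehash-style link: OpenSSL finds CAs via <subject-hash>.0
	    sudo ln -fs "$CERT" "/etc/ssl/certs/$HASH.0"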
	I0520 08:10:09.484411    2351 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0520 08:10:09.485893    2351 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0520 08:10:09.485921    2351 kubeadm.go:404] StartCluster: {Name:image-027000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:image-027000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0520 08:10:09.485983    2351 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0520 08:10:09.493037    2351 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0520 08:10:09.496264    2351 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 08:10:09.499108    2351 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 08:10:09.502301    2351 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 08:10:09.502311    2351 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 08:10:09.524741    2351 kubeadm.go:322] [init] Using Kubernetes version: v1.27.2
	I0520 08:10:09.524769    2351 kubeadm.go:322] [preflight] Running pre-flight checks
	I0520 08:10:09.583050    2351 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 08:10:09.583104    2351 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 08:10:09.583154    2351 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0520 08:10:09.642571    2351 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 08:10:09.653134    2351 out.go:204]   - Generating certificates and keys ...
	I0520 08:10:09.653227    2351 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0520 08:10:09.653260    2351 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0520 08:10:09.752316    2351 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0520 08:10:09.838068    2351 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0520 08:10:09.873956    2351 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0520 08:10:10.039818    2351 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0520 08:10:10.200693    2351 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0520 08:10:10.200763    2351 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [image-027000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I0520 08:10:10.306631    2351 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0520 08:10:10.306695    2351 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [image-027000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I0520 08:10:10.416870    2351 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0520 08:10:10.683425    2351 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0520 08:10:10.757104    2351 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0520 08:10:10.757131    2351 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 08:10:10.842610    2351 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 08:10:10.935375    2351 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 08:10:11.001386    2351 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 08:10:11.076018    2351 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 08:10:11.082545    2351 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 08:10:11.082587    2351 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 08:10:11.082610    2351 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0520 08:10:11.168556    2351 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 08:10:11.187065    2351 out.go:204]   - Booting up control plane ...
	I0520 08:10:11.187117    2351 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 08:10:11.187157    2351 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 08:10:11.187189    2351 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 08:10:11.187235    2351 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 08:10:11.187351    2351 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0520 08:10:15.172171    2351 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.001964 seconds
	I0520 08:10:15.172247    2351 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0520 08:10:15.177997    2351 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0520 08:10:15.695362    2351 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0520 08:10:15.695590    2351 kubeadm.go:322] [mark-control-plane] Marking the node image-027000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0520 08:10:16.200666    2351 kubeadm.go:322] [bootstrap-token] Using token: cg61mv.pyw8v639uxe2crc2
	I0520 08:10:16.204298    2351 out.go:204]   - Configuring RBAC rules ...
	I0520 08:10:16.204343    2351 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0520 08:10:16.205350    2351 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0520 08:10:16.212726    2351 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0520 08:10:16.213932    2351 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0520 08:10:16.215131    2351 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0520 08:10:16.216499    2351 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0520 08:10:16.220427    2351 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0520 08:10:16.384553    2351 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0520 08:10:16.608543    2351 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0520 08:10:16.609755    2351 kubeadm.go:322] 
	I0520 08:10:16.609790    2351 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0520 08:10:16.609793    2351 kubeadm.go:322] 
	I0520 08:10:16.609824    2351 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0520 08:10:16.609826    2351 kubeadm.go:322] 
	I0520 08:10:16.609835    2351 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0520 08:10:16.609861    2351 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0520 08:10:16.609883    2351 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0520 08:10:16.609884    2351 kubeadm.go:322] 
	I0520 08:10:16.609909    2351 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0520 08:10:16.609910    2351 kubeadm.go:322] 
	I0520 08:10:16.609945    2351 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0520 08:10:16.609948    2351 kubeadm.go:322] 
	I0520 08:10:16.609974    2351 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0520 08:10:16.610032    2351 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0520 08:10:16.610070    2351 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0520 08:10:16.610071    2351 kubeadm.go:322] 
	I0520 08:10:16.610118    2351 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0520 08:10:16.610161    2351 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0520 08:10:16.610162    2351 kubeadm.go:322] 
	I0520 08:10:16.610202    2351 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token cg61mv.pyw8v639uxe2crc2 \
	I0520 08:10:16.610254    2351 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c637a44edb20ebaef1a3cd8bd36bb27010137a6bac525a779e19218d8d4ae1e6 \
	I0520 08:10:16.610263    2351 kubeadm.go:322] 	--control-plane 
	I0520 08:10:16.610283    2351 kubeadm.go:322] 
	I0520 08:10:16.610323    2351 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0520 08:10:16.610325    2351 kubeadm.go:322] 
	I0520 08:10:16.610361    2351 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token cg61mv.pyw8v639uxe2crc2 \
	I0520 08:10:16.610410    2351 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c637a44edb20ebaef1a3cd8bd36bb27010137a6bac525a779e19218d8d4ae1e6 
	I0520 08:10:16.610476    2351 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 08:10:16.610564    2351 kubeadm.go:322] W0520 15:10:09.874436    1307 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0520 08:10:16.610644    2351 kubeadm.go:322] W0520 15:10:11.459955    1307 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0520 08:10:16.610652    2351 cni.go:84] Creating CNI manager for ""
	I0520 08:10:16.610660    2351 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 08:10:16.619646    2351 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0520 08:10:16.622360    2351 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0520 08:10:16.625711    2351 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
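	Note: the 457 bytes pushed to /etc/cni/net.d/1-k8s.conflist configure the bridge CNI chosen above. The log does not show the file's contents; the following is a representative reconstruction of a bridge conflist with host-local IPAM on the pod CIDR from the kubeadm options (every value here is an assumption, not a dump of the real file):
	    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF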
	I0520 08:10:16.630935    2351 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 08:10:16.630992    2351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=24686ce6bbd657e092eb3c3fd6be64c1b7241033 minikube.k8s.io/name=image-027000 minikube.k8s.io/updated_at=2023_05_20T08_10_16_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 08:10:16.630993    2351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 08:10:16.684773    2351 kubeadm.go:1076] duration metric: took 53.814792ms to wait for elevateKubeSystemPrivileges.
	I0520 08:10:16.694674    2351 ops.go:34] apiserver oom_adj: -16
	I0520 08:10:16.694680    2351 kubeadm.go:406] StartCluster complete in 7.208772625s
	I0520 08:10:16.694694    2351 settings.go:142] acquiring lock: {Name:mk59154b06c6365bdac4601706e783ce490a045a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 08:10:16.694771    2351 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/16543-1012/kubeconfig
	I0520 08:10:16.695125    2351 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16543-1012/kubeconfig: {Name:mkf6fa7fb711448995f7c2c1a6e60e631893d6a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 08:10:16.695285    2351 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0520 08:10:16.695343    2351 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0520 08:10:16.695376    2351 addons.go:66] Setting storage-provisioner=true in profile "image-027000"
	I0520 08:10:16.695382    2351 addons.go:228] Setting addon storage-provisioner=true in "image-027000"
	I0520 08:10:16.695398    2351 addons.go:66] Setting default-storageclass=true in profile "image-027000"
	I0520 08:10:16.695405    2351 host.go:66] Checking if "image-027000" exists ...
	I0520 08:10:16.695405    2351 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "image-027000"
	I0520 08:10:16.695494    2351 config.go:182] Loaded profile config "image-027000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0520 08:10:16.701313    2351 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 08:10:16.705223    2351 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 08:10:16.705229    2351 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 08:10:16.705237    2351 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/image-027000/id_rsa Username:docker}
	I0520 08:10:16.710475    2351 addons.go:228] Setting addon default-storageclass=true in "image-027000"
	I0520 08:10:16.710490    2351 host.go:66] Checking if "image-027000" exists ...
	I0520 08:10:16.711225    2351 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 08:10:16.711229    2351 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 08:10:16.711235    2351 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/image-027000/id_rsa Username:docker}
	I0520 08:10:16.742581    2351 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
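	Note: the sed pipeline above splices a hosts block (plus a log directive) into the CoreDNS Corefile ahead of the forward plugin, so pods can resolve host.minikube.internal in-cluster. After the kubectl replace, the relevant Corefile fragment reads roughly:
	    hosts {
	       192.168.105.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf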
	I0520 08:10:16.745235    2351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 08:10:16.749732    2351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 08:10:17.207090    2351 start.go:916] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0520 08:10:17.216666    2351 kapi.go:248] "coredns" deployment in "kube-system" namespace and "image-027000" context rescaled to 1 replicas
	I0520 08:10:17.216679    2351 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 08:10:17.219640    2351 out.go:177] * Verifying Kubernetes components...
	I0520 08:10:17.228293    2351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 08:10:17.306227    2351 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0520 08:10:17.303157    2351 api_server.go:52] waiting for apiserver process to appear ...
	I0520 08:10:17.313304    2351 addons.go:499] enable addons completed in 617.963167ms: enabled=[storage-provisioner default-storageclass]
	I0520 08:10:17.313329    2351 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 08:10:17.317678    2351 api_server.go:72] duration metric: took 100.991083ms to wait for apiserver process to appear ...
	I0520 08:10:17.317681    2351 api_server.go:88] waiting for apiserver healthz status ...
	I0520 08:10:17.317690    2351 api_server.go:253] Checking apiserver healthz at https://192.168.105.5:8443/healthz ...
	I0520 08:10:17.320709    2351 api_server.go:279] https://192.168.105.5:8443/healthz returned 200:
	ok
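	Note: the healthz probe above can be replayed by hand; a quick sketch (-k skips TLS verification, acceptable only for local debugging against a freshly minted CA):
	    curl -k https://192.168.105.5:8443/healthz
	    # expected response body: ok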
	I0520 08:10:17.321264    2351 api_server.go:141] control plane version: v1.27.2
	I0520 08:10:17.321268    2351 api_server.go:131] duration metric: took 3.585ms to wait for apiserver health ...
	I0520 08:10:17.321270    2351 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 08:10:17.324035    2351 system_pods.go:59] 5 kube-system pods found
	I0520 08:10:17.324040    2351 system_pods.go:61] "etcd-image-027000" [3473b665-064f-4538-8d83-64c046a4ebc8] Pending
	I0520 08:10:17.324042    2351 system_pods.go:61] "kube-apiserver-image-027000" [4e6ec967-1baa-42dc-ad55-21c7d5762d5a] Pending
	I0520 08:10:17.324044    2351 system_pods.go:61] "kube-controller-manager-image-027000" [d8776dfe-5918-49f7-8b4d-9ccdb7edf5f0] Pending
	I0520 08:10:17.324046    2351 system_pods.go:61] "kube-scheduler-image-027000" [94ede0e0-0a9f-4a91-800d-3740c597c813] Pending
	I0520 08:10:17.324048    2351 system_pods.go:61] "storage-provisioner" [4dd0d88f-264a-4147-a2ae-3c8c2408dec9] Pending
	I0520 08:10:17.324049    2351 system_pods.go:74] duration metric: took 2.778041ms to wait for pod list to return data ...
	I0520 08:10:17.324052    2351 kubeadm.go:581] duration metric: took 107.365375ms to wait for : map[apiserver:true system_pods:true] ...
	I0520 08:10:17.324057    2351 node_conditions.go:102] verifying NodePressure condition ...
	I0520 08:10:17.325443    2351 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0520 08:10:17.325449    2351 node_conditions.go:123] node cpu capacity is 2
	I0520 08:10:17.325454    2351 node_conditions.go:105] duration metric: took 1.395666ms to run NodePressure ...
	I0520 08:10:17.325457    2351 start.go:228] waiting for startup goroutines ...
	I0520 08:10:17.325460    2351 start.go:233] waiting for cluster config update ...
	I0520 08:10:17.325464    2351 start.go:242] writing updated cluster config ...
	I0520 08:10:17.325716    2351 ssh_runner.go:195] Run: rm -f paused
	I0520 08:10:17.355924    2351 start.go:568] kubectl: 1.25.9, cluster: 1.27.2 (minor skew: 2)
	I0520 08:10:17.359338    2351 out.go:177] 
	W0520 08:10:17.363443    2351 out.go:239] ! /usr/local/bin/kubectl is version 1.25.9, which may have incompatibilities with Kubernetes 1.27.2.
	I0520 08:10:17.367298    2351 out.go:177]   - Want kubectl v1.27.2? Try 'minikube kubectl -- get pods -A'
	I0520 08:10:17.381443    2351 out.go:177] * Done! kubectl is now configured to use "image-027000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Sat 2023-05-20 15:09:59 UTC, ends at Sat 2023-05-20 15:10:20 UTC. --
	May 20 15:10:12 image-027000 cri-dockerd[1137]: time="2023-05-20T15:10:12Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2f5698edf8fc7cec5faf1596668ccd43d6ad1826af16a824545c8864f923a614/resolv.conf as [nameserver 192.168.105.1]"
	May 20 15:10:12 image-027000 cri-dockerd[1137]: time="2023-05-20T15:10:12Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/baab361d7f23c14f0f3c38e0cb998ebf3e8f11f3062ef74b1e137b1484c372fe/resolv.conf as [nameserver 192.168.105.1]"
	May 20 15:10:12 image-027000 dockerd[923]: time="2023-05-20T15:10:12.371575048Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 15:10:12 image-027000 dockerd[923]: time="2023-05-20T15:10:12.371629673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 15:10:12 image-027000 dockerd[923]: time="2023-05-20T15:10:12.371642173Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 15:10:12 image-027000 dockerd[923]: time="2023-05-20T15:10:12.371651589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 15:10:12 image-027000 dockerd[923]: time="2023-05-20T15:10:12.422775548Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 15:10:12 image-027000 dockerd[923]: time="2023-05-20T15:10:12.423183173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 15:10:12 image-027000 dockerd[923]: time="2023-05-20T15:10:12.423195339Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 15:10:12 image-027000 dockerd[923]: time="2023-05-20T15:10:12.423203923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 15:10:12 image-027000 dockerd[923]: time="2023-05-20T15:10:12.428945923Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 15:10:12 image-027000 dockerd[923]: time="2023-05-20T15:10:12.428993423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 15:10:12 image-027000 dockerd[923]: time="2023-05-20T15:10:12.429003839Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 15:10:12 image-027000 dockerd[923]: time="2023-05-20T15:10:12.429011881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 15:10:19 image-027000 dockerd[917]: time="2023-05-20T15:10:19.865138343Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	May 20 15:10:20 image-027000 dockerd[917]: time="2023-05-20T15:10:20.022685968Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	May 20 15:10:20 image-027000 dockerd[917]: time="2023-05-20T15:10:20.055163260Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	May 20 15:10:20 image-027000 dockerd[923]: time="2023-05-20T15:10:20.100330968Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 15:10:20 image-027000 dockerd[923]: time="2023-05-20T15:10:20.100358385Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 15:10:20 image-027000 dockerd[923]: time="2023-05-20T15:10:20.100368593Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 15:10:20 image-027000 dockerd[923]: time="2023-05-20T15:10:20.100374760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 15:10:20 image-027000 dockerd[917]: time="2023-05-20T15:10:20.248263510Z" level=info msg="ignoring event" container=84ffe0fcefdb5ae9240704ee24d193f52eb1ccd1f447ce122ae0999436b66f78 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 15:10:20 image-027000 dockerd[923]: time="2023-05-20T15:10:20.248617302Z" level=info msg="shim disconnected" id=84ffe0fcefdb5ae9240704ee24d193f52eb1ccd1f447ce122ae0999436b66f78 namespace=moby
	May 20 15:10:20 image-027000 dockerd[923]: time="2023-05-20T15:10:20.248647427Z" level=warning msg="cleaning up after shim disconnected" id=84ffe0fcefdb5ae9240704ee24d193f52eb1ccd1f447ce122ae0999436b66f78 namespace=moby
	May 20 15:10:20 image-027000 dockerd[923]: time="2023-05-20T15:10:20.248651468Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	04cc9796d6f8d       72c9df6be7f1b       8 seconds ago       Running             kube-apiserver            0                   baab361d7f23c
	e97f8f7348e9a       305d7ed1dae28       8 seconds ago       Running             kube-scheduler            0                   2f5698edf8fc7
	c4b124cdb4021       24bc64e911039       8 seconds ago       Running             etcd                      0                   37c05b14dc691
	66eaab5ec87fd       2ee705380c3c5       8 seconds ago       Running             kube-controller-manager   0                   043bd0c4b8689
	
	* 
	* ==> describe nodes <==
	* Name:               image-027000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=image-027000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=24686ce6bbd657e092eb3c3fd6be64c1b7241033
	                    minikube.k8s.io/name=image-027000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_05_20T08_10_16_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 20 May 2023 15:10:14 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  image-027000
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 20 May 2023 15:10:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 20 May 2023 15:10:19 +0000   Sat, 20 May 2023 15:10:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 20 May 2023 15:10:19 +0000   Sat, 20 May 2023 15:10:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 20 May 2023 15:10:19 +0000   Sat, 20 May 2023 15:10:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 20 May 2023 15:10:19 +0000   Sat, 20 May 2023 15:10:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.5
	  Hostname:    image-027000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905972Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905972Ki
	  pods:               110
	System Info:
	  Machine ID:                 c9db1041d74048fa8a9acb4516d0459a
	  System UUID:                c9db1041d74048fa8a9acb4516d0459a
	  Boot ID:                    d3f3f0ec-9a9d-42a6-ac9d-30ef69c9fe51
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.23
	  Kubelet Version:            v1.27.2
	  Kube-Proxy Version:         v1.27.2
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-image-027000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4s
	  kube-system                 kube-apiserver-image-027000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-controller-manager-image-027000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-scheduler-image-027000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (2%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age              From     Message
	  ----    ------                   ----             ----     -------
	  Normal  Starting                 9s               kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  9s (x8 over 9s)  kubelet  Node image-027000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x8 over 9s)  kubelet  Node image-027000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x7 over 9s)  kubelet  Node image-027000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9s               kubelet  Updated Node Allocatable limit across pods
	  Normal  Starting                 4s               kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  4s               kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4s               kubelet  Node image-027000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4s               kubelet  Node image-027000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4s               kubelet  Node image-027000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                1s               kubelet  Node image-027000 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [May20 15:09] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.650534] EINJ: EINJ table not found.
	[  +0.535231] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.043831] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000922] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[May20 15:10] systemd-fstab-generator[474]: Ignoring "noauto" for root device
	[  +0.062884] systemd-fstab-generator[485]: Ignoring "noauto" for root device
	[  +2.689095] systemd-fstab-generator[688]: Ignoring "noauto" for root device
	[  +1.402971] systemd-fstab-generator[849]: Ignoring "noauto" for root device
	[  +0.179876] systemd-fstab-generator[886]: Ignoring "noauto" for root device
	[  +0.080900] systemd-fstab-generator[897]: Ignoring "noauto" for root device
	[  +0.079997] systemd-fstab-generator[910]: Ignoring "noauto" for root device
	[  +1.127259] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.069946] systemd-fstab-generator[1057]: Ignoring "noauto" for root device
	[  +0.081907] systemd-fstab-generator[1068]: Ignoring "noauto" for root device
	[  +0.081507] systemd-fstab-generator[1079]: Ignoring "noauto" for root device
	[  +0.068050] systemd-fstab-generator[1090]: Ignoring "noauto" for root device
	[  +0.072931] systemd-fstab-generator[1130]: Ignoring "noauto" for root device
	[  +2.283970] systemd-fstab-generator[1400]: Ignoring "noauto" for root device
	[  +5.113601] systemd-fstab-generator[2264]: Ignoring "noauto" for root device
	[  +3.544131] kauditd_printk_skb: 41 callbacks suppressed
	
	* 
	* ==> etcd [c4b124cdb402] <==
	* {"level":"info","ts":"2023-05-20T15:10:12.614Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-05-20T15:10:12.614Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"58de0efec1d86300","initial-advertise-peer-urls":["https://192.168.105.5:2380"],"listen-peer-urls":["https://192.168.105.5:2380"],"advertise-client-urls":["https://192.168.105.5:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.5:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-05-20T15:10:12.614Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-05-20T15:10:12.619Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.105.5:2380"}
	{"level":"info","ts":"2023-05-20T15:10:12.619Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.105.5:2380"}
	{"level":"info","ts":"2023-05-20T15:10:12.619Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 switched to configuration voters=(6403572207504089856)"}
	{"level":"info","ts":"2023-05-20T15:10:12.619Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cd5c0afff2184bea","local-member-id":"58de0efec1d86300","added-peer-id":"58de0efec1d86300","added-peer-peer-urls":["https://192.168.105.5:2380"]}
	{"level":"info","ts":"2023-05-20T15:10:13.594Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 is starting a new election at term 1"}
	{"level":"info","ts":"2023-05-20T15:10:13.594Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-05-20T15:10:13.594Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 received MsgPreVoteResp from 58de0efec1d86300 at term 1"}
	{"level":"info","ts":"2023-05-20T15:10:13.595Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became candidate at term 2"}
	{"level":"info","ts":"2023-05-20T15:10:13.595Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 received MsgVoteResp from 58de0efec1d86300 at term 2"}
	{"level":"info","ts":"2023-05-20T15:10:13.595Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became leader at term 2"}
	{"level":"info","ts":"2023-05-20T15:10:13.595Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 58de0efec1d86300 elected leader 58de0efec1d86300 at term 2"}
	{"level":"info","ts":"2023-05-20T15:10:13.596Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-20T15:10:13.597Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"58de0efec1d86300","local-member-attributes":"{Name:image-027000 ClientURLs:[https://192.168.105.5:2379]}","request-path":"/0/members/58de0efec1d86300/attributes","cluster-id":"cd5c0afff2184bea","publish-timeout":"7s"}
	{"level":"info","ts":"2023-05-20T15:10:13.598Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cd5c0afff2184bea","local-member-id":"58de0efec1d86300","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-20T15:10:13.598Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-05-20T15:10:13.598Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-05-20T15:10:13.598Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-05-20T15:10:13.599Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-20T15:10:13.599Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-20T15:10:13.598Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-05-20T15:10:13.601Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-05-20T15:10:13.601Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.105.5:2379"}
	
	* 
	* ==> kernel <==
	*  15:10:20 up 0 min,  0 users,  load average: 1.39, 0.30, 0.10
	Linux image-027000 5.10.57 #1 SMP PREEMPT Mon May 15 19:29:44 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [04cc9796d6f8] <==
	* I0520 15:10:14.314180       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0520 15:10:14.332076       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0520 15:10:14.332131       1 shared_informer.go:318] Caches are synced for configmaps
	I0520 15:10:14.332087       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0520 15:10:14.332239       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0520 15:10:14.332495       1 controller.go:624] quota admission added evaluator for: namespaces
	I0520 15:10:14.332674       1 cache.go:39] Caches are synced for autoregister controller
	I0520 15:10:14.332763       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0520 15:10:14.332873       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0520 15:10:14.342127       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0520 15:10:14.360981       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0520 15:10:15.104050       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0520 15:10:15.239868       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0520 15:10:15.243471       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0520 15:10:15.243486       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0520 15:10:15.422171       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0520 15:10:15.437628       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0520 15:10:15.507679       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0520 15:10:15.510140       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.105.5]
	I0520 15:10:15.510536       1 controller.go:624] quota admission added evaluator for: endpoints
	I0520 15:10:15.511925       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0520 15:10:16.304025       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0520 15:10:16.671173       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0520 15:10:16.675315       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0520 15:10:16.679601       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	
	* 
	* ==> kube-controller-manager [66eaab5ec87f] <==
	* I0520 15:10:16.754196       1 core.go:228] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes."
	I0520 15:10:16.754200       1 controllermanager.go:616] "Warning: skipping controller" controller="route"
	I0520 15:10:16.754225       1 certificate_controller.go:112] Starting certificate controller "csrsigning-legacy-unknown"
	I0520 15:10:16.754233       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0520 15:10:16.754259       1 certificate_controller.go:112] Starting certificate controller "csrsigning-kubelet-serving"
	I0520 15:10:16.754265       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0520 15:10:16.754274       1 certificate_controller.go:112] Starting certificate controller "csrsigning-kubelet-client"
	I0520 15:10:16.754278       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0520 15:10:16.754287       1 certificate_controller.go:112] Starting certificate controller "csrsigning-kube-apiserver-client"
	I0520 15:10:16.754292       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0520 15:10:16.754306       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0520 15:10:16.754753       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0520 15:10:16.754772       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0520 15:10:16.754786       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	E0520 15:10:16.802042       1 core.go:213] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
	I0520 15:10:16.802055       1 controllermanager.go:616] "Warning: skipping controller" controller="cloud-node-lifecycle"
	I0520 15:10:16.952087       1 controllermanager.go:638] "Started controller" controller="deployment"
	I0520 15:10:16.952137       1 deployment_controller.go:168] "Starting controller" controller="deployment"
	I0520 15:10:16.952141       1 shared_informer.go:311] Waiting for caches to sync for deployment
	I0520 15:10:17.159909       1 controllermanager.go:638] "Started controller" controller="disruption"
	I0520 15:10:17.159936       1 disruption.go:423] Sending events to api server.
	I0520 15:10:17.159956       1 disruption.go:434] Starting disruption controller
	I0520 15:10:17.159959       1 shared_informer.go:311] Waiting for caches to sync for disruption
	I0520 15:10:17.302671       1 controllermanager.go:638] "Started controller" controller="bootstrapsigner"
	I0520 15:10:17.302716       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	
	* 
	* ==> kube-scheduler [e97f8f7348e9] <==
	* W0520 15:10:14.310661       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0520 15:10:14.310690       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0520 15:10:14.310727       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0520 15:10:14.310736       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0520 15:10:14.310759       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0520 15:10:14.310765       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0520 15:10:14.310833       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0520 15:10:14.310690       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0520 15:10:14.310662       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0520 15:10:14.310932       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0520 15:10:14.310790       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0520 15:10:14.310971       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0520 15:10:15.219127       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0520 15:10:15.219193       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0520 15:10:15.228731       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0520 15:10:15.228768       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0520 15:10:15.295608       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0520 15:10:15.295661       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0520 15:10:15.297381       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0520 15:10:15.297427       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0520 15:10:15.300970       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0520 15:10:15.301021       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0520 15:10:15.319332       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0520 15:10:15.319350       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0520 15:10:18.003417       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Sat 2023-05-20 15:09:59 UTC, ends at Sat 2023-05-20 15:10:20 UTC. --
	May 20 15:10:16 image-027000 kubelet[2270]: I0520 15:10:16.845556    2270 topology_manager.go:212] "Topology Admit Handler"
	May 20 15:10:16 image-027000 kubelet[2270]: I0520 15:10:16.845569    2270 topology_manager.go:212] "Topology Admit Handler"
	May 20 15:10:16 image-027000 kubelet[2270]: I0520 15:10:16.872723    2270 kubelet_node_status.go:70] "Attempting to register node" node="image-027000"
	May 20 15:10:16 image-027000 kubelet[2270]: I0520 15:10:16.876793    2270 kubelet_node_status.go:108] "Node was previously registered" node="image-027000"
	May 20 15:10:16 image-027000 kubelet[2270]: I0520 15:10:16.876831    2270 kubelet_node_status.go:73] "Successfully registered node" node="image-027000"
	May 20 15:10:17 image-027000 kubelet[2270]: I0520 15:10:17.021869    2270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/92e08e6d69d90847af27e586ecafb946-ca-certs\") pod \"kube-controller-manager-image-027000\" (UID: \"92e08e6d69d90847af27e586ecafb946\") " pod="kube-system/kube-controller-manager-image-027000"
	May 20 15:10:17 image-027000 kubelet[2270]: I0520 15:10:17.021893    2270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/92e08e6d69d90847af27e586ecafb946-k8s-certs\") pod \"kube-controller-manager-image-027000\" (UID: \"92e08e6d69d90847af27e586ecafb946\") " pod="kube-system/kube-controller-manager-image-027000"
	May 20 15:10:17 image-027000 kubelet[2270]: I0520 15:10:17.021905    2270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/92e08e6d69d90847af27e586ecafb946-usr-share-ca-certificates\") pod \"kube-controller-manager-image-027000\" (UID: \"92e08e6d69d90847af27e586ecafb946\") " pod="kube-system/kube-controller-manager-image-027000"
	May 20 15:10:17 image-027000 kubelet[2270]: I0520 15:10:17.021920    2270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98b191e722c2131818ad5075d1522add-ca-certs\") pod \"kube-apiserver-image-027000\" (UID: \"98b191e722c2131818ad5075d1522add\") " pod="kube-system/kube-apiserver-image-027000"
	May 20 15:10:17 image-027000 kubelet[2270]: I0520 15:10:17.021929    2270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98b191e722c2131818ad5075d1522add-k8s-certs\") pod \"kube-apiserver-image-027000\" (UID: \"98b191e722c2131818ad5075d1522add\") " pod="kube-system/kube-apiserver-image-027000"
	May 20 15:10:17 image-027000 kubelet[2270]: I0520 15:10:17.021938    2270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98b191e722c2131818ad5075d1522add-usr-share-ca-certificates\") pod \"kube-apiserver-image-027000\" (UID: \"98b191e722c2131818ad5075d1522add\") " pod="kube-system/kube-apiserver-image-027000"
	May 20 15:10:17 image-027000 kubelet[2270]: I0520 15:10:17.021949    2270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/92e08e6d69d90847af27e586ecafb946-flexvolume-dir\") pod \"kube-controller-manager-image-027000\" (UID: \"92e08e6d69d90847af27e586ecafb946\") " pod="kube-system/kube-controller-manager-image-027000"
	May 20 15:10:17 image-027000 kubelet[2270]: I0520 15:10:17.021958    2270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/92e08e6d69d90847af27e586ecafb946-kubeconfig\") pod \"kube-controller-manager-image-027000\" (UID: \"92e08e6d69d90847af27e586ecafb946\") " pod="kube-system/kube-controller-manager-image-027000"
	May 20 15:10:17 image-027000 kubelet[2270]: I0520 15:10:17.021969    2270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eae2b68056b76120db86be9cc15aedd7-kubeconfig\") pod \"kube-scheduler-image-027000\" (UID: \"eae2b68056b76120db86be9cc15aedd7\") " pod="kube-system/kube-scheduler-image-027000"
	May 20 15:10:17 image-027000 kubelet[2270]: I0520 15:10:17.021977    2270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/e7d793788e682ff2c52b559bbe31813c-etcd-certs\") pod \"etcd-image-027000\" (UID: \"e7d793788e682ff2c52b559bbe31813c\") " pod="kube-system/etcd-image-027000"
	May 20 15:10:17 image-027000 kubelet[2270]: I0520 15:10:17.021986    2270 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/e7d793788e682ff2c52b559bbe31813c-etcd-data\") pod \"etcd-image-027000\" (UID: \"e7d793788e682ff2c52b559bbe31813c\") " pod="kube-system/etcd-image-027000"
	May 20 15:10:17 image-027000 kubelet[2270]: I0520 15:10:17.710260    2270 apiserver.go:52] "Watching apiserver"
	May 20 15:10:17 image-027000 kubelet[2270]: I0520 15:10:17.720976    2270 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world"
	May 20 15:10:17 image-027000 kubelet[2270]: I0520 15:10:17.728245    2270 reconciler.go:41] "Reconciler: start to sync state"
	May 20 15:10:17 image-027000 kubelet[2270]: E0520 15:10:17.780775    2270 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-apiserver-image-027000\" already exists" pod="kube-system/kube-apiserver-image-027000"
	May 20 15:10:17 image-027000 kubelet[2270]: I0520 15:10:17.789947    2270 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-image-027000" podStartSLOduration=1.789922259 podCreationTimestamp="2023-05-20 15:10:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-05-20 15:10:17.78976455 +0000 UTC m=+1.131092127" watchObservedRunningTime="2023-05-20 15:10:17.789922259 +0000 UTC m=+1.131249835"
	May 20 15:10:17 image-027000 kubelet[2270]: I0520 15:10:17.793938    2270 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-image-027000" podStartSLOduration=1.793918134 podCreationTimestamp="2023-05-20 15:10:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-05-20 15:10:17.793830384 +0000 UTC m=+1.135157960" watchObservedRunningTime="2023-05-20 15:10:17.793918134 +0000 UTC m=+1.135245710"
	May 20 15:10:17 image-027000 kubelet[2270]: I0520 15:10:17.797651    2270 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-image-027000" podStartSLOduration=1.797635175 podCreationTimestamp="2023-05-20 15:10:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-05-20 15:10:17.797566217 +0000 UTC m=+1.138893794" watchObservedRunningTime="2023-05-20 15:10:17.797635175 +0000 UTC m=+1.138962752"
	May 20 15:10:17 image-027000 kubelet[2270]: I0520 15:10:17.807814    2270 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-image-027000" podStartSLOduration=1.807789467 podCreationTimestamp="2023-05-20 15:10:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-05-20 15:10:17.806433592 +0000 UTC m=+1.147761169" watchObservedRunningTime="2023-05-20 15:10:17.807789467 +0000 UTC m=+1.149117044"
	May 20 15:10:19 image-027000 kubelet[2270]: I0520 15:10:19.249728    2270 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	

-- /stdout --
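The control-plane logs above look healthy: etcd member 58de0efec1d86300 elects itself leader at term 2 and both client listeners (127.0.0.1:2379 and 192.168.105.5:2379) come up, while the kube-scheduler "forbidden" list errors at 15:10:14-15 are the usual transient startup noise before the bootstrap RBAC rules are in place (its caches sync at 15:10:18). Below is a minimal sketch, not part of the test harness, of checking etcd leadership directly with the official go.etcd.io/etcd/client/v3 package; the endpoint is taken from the log, and a real connection to this cluster would also need the client TLS material under /var/lib/minikube/certs:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	// Endpoint from the "serving client traffic securely" log line above.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"https://192.168.105.5:2379"},
		DialTimeout: 5 * time.Second,
		// TLS: &tls.Config{...}, // required in practice; certs live under /var/lib/minikube/certs
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Status reports the leader's member ID and raft term; the log shows
	// member 58de0efec1d86300 becoming leader at term 2.
	resp, err := cli.Status(ctx, "https://192.168.105.5:2379")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("leader=%x raftTerm=%d\n", resp.Leader, resp.RaftTerm)
}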
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p image-027000 -n image-027000
helpers_test.go:261: (dbg) Run:  kubectl --context image-027000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestImageBuild/serial/BuildWithBuildArg]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context image-027000 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context image-027000 describe pod storage-provisioner: exit status 1 (39.276917ms)

** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:279: kubectl --context image-027000 describe pod storage-provisioner: exit status 1
--- FAIL: TestImageBuild/serial/BuildWithBuildArg (1.14s)
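TestImageBuild/serial/BuildWithBuildArg exercises `minikube image build` with --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache; the exact invocation is recorded in the Audit table further down. As a minimal, hypothetical sketch of reproducing that step outside the harness (binary, profile, and testdata paths are copied from this report, and the "output should contain the arg value" check is an assumption about what the harness verifies):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same invocation the Audit table records for this test.
	cmd := exec.Command("out/minikube-darwin-arm64",
		"image", "build", "-t", "aaa:latest",
		"--build-opt=build-arg=ENV_A=test_env_str",
		"--build-opt=no-cache",
		"./testdata/image-build/test-arg",
		"-p", "image-027000")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Printf("image build failed: %v\n%s", err, out)
		return
	}
	// Assumed assertion: the Dockerfile echoes ENV_A, so the value
	// should appear in the build output when the arg is honored.
	if !strings.Contains(string(out), "test_env_str") {
		fmt.Println("ENV_A build arg did not reach the build")
	}
}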

TestIngressAddonLegacy/serial/ValidateIngressAddons (54.29s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-371000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-371000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (16.705611s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-371000 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-371000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [efa26872-25f8-49b1-a4bf-55bb27262e63] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [efa26872-25f8-49b1-a4bf-55bb27262e63] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.012509125s
addons_test.go:238: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-371000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-371000 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-371000 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.105.6
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.105.6: exit status 1 (15.038410708s)

-- stdout --
	;; connection timed out; no servers could be reached
	

-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.105.6" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
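The failing step is the DNS probe: nslookup hello-john.test 192.168.105.6 reaches no server within 15s, i.e. the ingress-dns addon on the VM never answered. A minimal sketch of the same query in Go, assuming ingress-dns serves plain DNS on port 53 of the minikube IP (hostname and IP taken from the output above):

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Force every lookup through the VM's DNS server instead of the
	// system resolver, mirroring `nslookup hello-john.test 192.168.105.6`.
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			return d.DialContext(ctx, network, "192.168.105.6:53")
		},
	}

	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
	defer cancel()

	addrs, err := r.LookupHost(ctx, "hello-john.test")
	if err != nil {
		// This run hit the equivalent branch: "connection timed out;
		// no servers could be reached".
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("resolved:", addrs)
}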
addons_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-371000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-371000 addons disable ingress-dns --alsologtostderr -v=1: (4.127092792s)
addons_test.go:287: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-371000 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-371000 addons disable ingress --alsologtostderr -v=1: (7.064546333s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ingress-addon-legacy-371000 -n ingress-addon-legacy-371000
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-371000 logs -n 25
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                   Args                   |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| dashboard      | --url --port 36195                       | functional-537000           | jenkins | v1.30.1 | 20 May 23 08:09 PDT | 20 May 23 08:09 PDT |
	|                | -p functional-537000                     |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                   |                             |         |         |                     |                     |
	| update-context | functional-537000                        | functional-537000           | jenkins | v1.30.1 | 20 May 23 08:09 PDT | 20 May 23 08:09 PDT |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| update-context | functional-537000                        | functional-537000           | jenkins | v1.30.1 | 20 May 23 08:09 PDT | 20 May 23 08:09 PDT |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| update-context | functional-537000                        | functional-537000           | jenkins | v1.30.1 | 20 May 23 08:09 PDT | 20 May 23 08:09 PDT |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| image          | functional-537000                        | functional-537000           | jenkins | v1.30.1 | 20 May 23 08:09 PDT | 20 May 23 08:09 PDT |
	|                | image ls --format short                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| image          | functional-537000                        | functional-537000           | jenkins | v1.30.1 | 20 May 23 08:09 PDT | 20 May 23 08:09 PDT |
	|                | image ls --format yaml                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| ssh            | functional-537000 ssh pgrep              | functional-537000           | jenkins | v1.30.1 | 20 May 23 08:09 PDT |                     |
	|                | buildkitd                                |                             |         |         |                     |                     |
	| image          | functional-537000 image build -t         | functional-537000           | jenkins | v1.30.1 | 20 May 23 08:09 PDT | 20 May 23 08:09 PDT |
	|                | localhost/my-image:functional-537000     |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr         |                             |         |         |                     |                     |
	| image          | functional-537000 image ls               | functional-537000           | jenkins | v1.30.1 | 20 May 23 08:09 PDT | 20 May 23 08:09 PDT |
	| image          | functional-537000                        | functional-537000           | jenkins | v1.30.1 | 20 May 23 08:09 PDT | 20 May 23 08:09 PDT |
	|                | image ls --format json                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| image          | functional-537000                        | functional-537000           | jenkins | v1.30.1 | 20 May 23 08:09 PDT | 20 May 23 08:09 PDT |
	|                | image ls --format table                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| delete         | -p functional-537000                     | functional-537000           | jenkins | v1.30.1 | 20 May 23 08:09 PDT | 20 May 23 08:09 PDT |
	| start          | -p image-027000 --driver=qemu2           | image-027000                | jenkins | v1.30.1 | 20 May 23 08:09 PDT | 20 May 23 08:10 PDT |
	|                |                                          |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-027000                | jenkins | v1.30.1 | 20 May 23 08:10 PDT | 20 May 23 08:10 PDT |
	|                | ./testdata/image-build/test-normal       |                             |         |         |                     |                     |
	|                | -p image-027000                          |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-027000                | jenkins | v1.30.1 | 20 May 23 08:10 PDT | 20 May 23 08:10 PDT |
	|                | --build-opt=build-arg=ENV_A=test_env_str |                             |         |         |                     |                     |
	|                | --build-opt=no-cache                     |                             |         |         |                     |                     |
	|                | ./testdata/image-build/test-arg -p       |                             |         |         |                     |                     |
	|                | image-027000                             |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-027000                | jenkins | v1.30.1 | 20 May 23 08:10 PDT | 20 May 23 08:10 PDT |
	|                | ./testdata/image-build/test-normal       |                             |         |         |                     |                     |
	|                | --build-opt=no-cache -p                  |                             |         |         |                     |                     |
	|                | image-027000                             |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-027000                | jenkins | v1.30.1 | 20 May 23 08:10 PDT | 20 May 23 08:10 PDT |
	|                | -f inner/Dockerfile                      |                             |         |         |                     |                     |
	|                | ./testdata/image-build/test-f            |                             |         |         |                     |                     |
	|                | -p image-027000                          |                             |         |         |                     |                     |
	| delete         | -p image-027000                          | image-027000                | jenkins | v1.30.1 | 20 May 23 08:10 PDT | 20 May 23 08:10 PDT |
	| start          | -p ingress-addon-legacy-371000           | ingress-addon-legacy-371000 | jenkins | v1.30.1 | 20 May 23 08:10 PDT | 20 May 23 08:11 PDT |
	|                | --kubernetes-version=v1.18.20            |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                   |                             |         |         |                     |                     |
	|                | --driver=qemu2                           |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-371000              | ingress-addon-legacy-371000 | jenkins | v1.30.1 | 20 May 23 08:11 PDT | 20 May 23 08:11 PDT |
	|                | addons enable ingress                    |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                   |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-371000              | ingress-addon-legacy-371000 | jenkins | v1.30.1 | 20 May 23 08:11 PDT | 20 May 23 08:11 PDT |
	|                | addons enable ingress-dns                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                   |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-371000              | ingress-addon-legacy-371000 | jenkins | v1.30.1 | 20 May 23 08:12 PDT | 20 May 23 08:12 PDT |
	|                | ssh curl -s http://127.0.0.1/            |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'             |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-371000 ip           | ingress-addon-legacy-371000 | jenkins | v1.30.1 | 20 May 23 08:12 PDT | 20 May 23 08:12 PDT |
	| addons         | ingress-addon-legacy-371000              | ingress-addon-legacy-371000 | jenkins | v1.30.1 | 20 May 23 08:12 PDT | 20 May 23 08:12 PDT |
	|                | addons disable ingress-dns               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                   |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-371000              | ingress-addon-legacy-371000 | jenkins | v1.30.1 | 20 May 23 08:12 PDT | 20 May 23 08:12 PDT |
	|                | addons disable ingress                   |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                   |                             |         |         |                     |                     |
	|----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/05/20 08:10:21
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 08:10:21.157345    2411 out.go:296] Setting OutFile to fd 1 ...
	I0520 08:10:21.157474    2411 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:10:21.157477    2411 out.go:309] Setting ErrFile to fd 2...
	I0520 08:10:21.157480    2411 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:10:21.157560    2411 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16543-1012/.minikube/bin
	I0520 08:10:21.158610    2411 out.go:303] Setting JSON to false
	I0520 08:10:21.174932    2411 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":592,"bootTime":1684594829,"procs":451,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 08:10:21.175015    2411 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0520 08:10:21.179416    2411 out.go:177] * [ingress-addon-legacy-371000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0520 08:10:21.187412    2411 out.go:177]   - MINIKUBE_LOCATION=16543
	I0520 08:10:21.187466    2411 notify.go:220] Checking for updates...
	I0520 08:10:21.194553    2411 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	I0520 08:10:21.197504    2411 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 08:10:21.201456    2411 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 08:10:21.204554    2411 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	I0520 08:10:21.207337    2411 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 08:10:21.210638    2411 driver.go:375] Setting default libvirt URI to qemu:///system
	I0520 08:10:21.214441    2411 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 08:10:21.220383    2411 start.go:295] selected driver: qemu2
	I0520 08:10:21.220392    2411 start.go:870] validating driver "qemu2" against <nil>
	I0520 08:10:21.220400    2411 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 08:10:21.222625    2411 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0520 08:10:21.226396    2411 out.go:177] * Automatically selected the socket_vmnet network
	I0520 08:10:21.229507    2411 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 08:10:21.229531    2411 cni.go:84] Creating CNI manager for ""
	I0520 08:10:21.229541    2411 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0520 08:10:21.229545    2411 start_flags.go:319] config:
	{Name:ingress-addon-legacy-371000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-371000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0520 08:10:21.229635    2411 iso.go:125] acquiring lock: {Name:mkf05a14b47e9b67445252fbb4917f3bcb37f054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 08:10:21.237389    2411 out.go:177] * Starting control plane node ingress-addon-legacy-371000 in cluster ingress-addon-legacy-371000
	I0520 08:10:21.241453    2411 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0520 08:10:21.436055    2411 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I0520 08:10:21.436152    2411 cache.go:57] Caching tarball of preloaded images
	I0520 08:10:21.436860    2411 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0520 08:10:21.441523    2411 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0520 08:10:21.449391    2411 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0520 08:10:21.665904    2411 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4?checksum=md5:c8c260b886393123ce9d312d8ac2379e -> /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I0520 08:10:33.169424    2411 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0520 08:10:33.169585    2411 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0520 08:10:33.919997    2411 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0520 08:10:33.920195    2411 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/ingress-addon-legacy-371000/config.json ...
	I0520 08:10:33.920218    2411 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/ingress-addon-legacy-371000/config.json: {Name:mk3d395a29cdb1c480d5313c94c91459cd335273 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 08:10:33.920463    2411 cache.go:195] Successfully downloaded all kic artifacts
	I0520 08:10:33.920473    2411 start.go:364] acquiring machines lock for ingress-addon-legacy-371000: {Name:mk270c257f49033ffda3b70fea9b877c78314aa8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 08:10:33.920500    2411 start.go:368] acquired machines lock for "ingress-addon-legacy-371000" in 23.583µs
	I0520 08:10:33.920519    2411 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-371000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-371000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 08:10:33.920552    2411 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 08:10:33.925548    2411 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0520 08:10:33.946111    2411 start.go:159] libmachine.API.Create for "ingress-addon-legacy-371000" (driver="qemu2")
	I0520 08:10:33.946147    2411 client.go:168] LocalClient.Create starting
	I0520 08:10:33.946241    2411 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca.pem
	I0520 08:10:33.946263    2411 main.go:141] libmachine: Decoding PEM data...
	I0520 08:10:33.946277    2411 main.go:141] libmachine: Parsing certificate...
	I0520 08:10:33.946321    2411 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/cert.pem
	I0520 08:10:33.946340    2411 main.go:141] libmachine: Decoding PEM data...
	I0520 08:10:33.946348    2411 main.go:141] libmachine: Parsing certificate...
	I0520 08:10:33.946677    2411 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16543-1012/.minikube/cache/iso/arm64/minikube-v1.30.1-1684174510-16506-arm64.iso...
	I0520 08:10:34.108664    2411 main.go:141] libmachine: Creating SSH key...
	I0520 08:10:34.200678    2411 main.go:141] libmachine: Creating Disk image...
	I0520 08:10:34.200683    2411 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 08:10:34.200827    2411 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/ingress-addon-legacy-371000/disk.qcow2.raw /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/ingress-addon-legacy-371000/disk.qcow2
	I0520 08:10:34.216883    2411 main.go:141] libmachine: STDOUT: 
	I0520 08:10:34.216900    2411 main.go:141] libmachine: STDERR: 
	I0520 08:10:34.216952    2411 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/ingress-addon-legacy-371000/disk.qcow2 +20000M
	I0520 08:10:34.224061    2411 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 08:10:34.224075    2411 main.go:141] libmachine: STDERR: 
	I0520 08:10:34.224088    2411 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/ingress-addon-legacy-371000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/ingress-addon-legacy-371000/disk.qcow2
	I0520 08:10:34.224101    2411 main.go:141] libmachine: Starting QEMU VM...
	I0520 08:10:34.224130    2411 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4096 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/ingress-addon-legacy-371000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/ingress-addon-legacy-371000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/ingress-addon-legacy-371000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:58:8d:07:2e:d4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/ingress-addon-legacy-371000/disk.qcow2
	I0520 08:10:34.273178    2411 main.go:141] libmachine: STDOUT: 
	I0520 08:10:34.273201    2411 main.go:141] libmachine: STDERR: 
	I0520 08:10:34.273205    2411 main.go:141] libmachine: Attempt 0
	I0520 08:10:34.273220    2411 main.go:141] libmachine: Searching for 32:58:8d:7:2e:d4 in /var/db/dhcpd_leases ...
	I0520 08:10:34.273285    2411 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0520 08:10:34.273308    2411 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:ea:9f:53:ae:c6:fe ID:1,ea:9f:53:ae:c6:fe Lease:0x646a3447}
	I0520 08:10:34.273315    2411 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:6:5b:64:76:26:39 ID:1,6:5b:64:76:26:39 Lease:0x646a3369}
	I0520 08:10:34.273321    2411 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:8a:69:5a:cb:74:51 ID:1,8a:69:5a:cb:74:51 Lease:0x6468e1dc}
	I0520 08:10:34.273326    2411 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:36:b1:60:c3:d2:35 ID:1,36:b1:60:c3:d2:35 Lease:0x646a3311}
	I0520 08:10:36.275444    2411 main.go:141] libmachine: Attempt 1
	I0520 08:10:36.275525    2411 main.go:141] libmachine: Searching for 32:58:8d:7:2e:d4 in /var/db/dhcpd_leases ...
	I0520 08:10:36.275975    2411 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0520 08:10:36.276027    2411 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:ea:9f:53:ae:c6:fe ID:1,ea:9f:53:ae:c6:fe Lease:0x646a3447}
	I0520 08:10:36.276099    2411 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:6:5b:64:76:26:39 ID:1,6:5b:64:76:26:39 Lease:0x646a3369}
	I0520 08:10:36.276159    2411 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:8a:69:5a:cb:74:51 ID:1,8a:69:5a:cb:74:51 Lease:0x6468e1dc}
	I0520 08:10:36.276190    2411 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:36:b1:60:c3:d2:35 ID:1,36:b1:60:c3:d2:35 Lease:0x646a3311}
	I0520 08:10:38.278312    2411 main.go:141] libmachine: Attempt 2
	I0520 08:10:38.278342    2411 main.go:141] libmachine: Searching for 32:58:8d:7:2e:d4 in /var/db/dhcpd_leases ...
	I0520 08:10:38.278441    2411 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0520 08:10:38.278454    2411 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:ea:9f:53:ae:c6:fe ID:1,ea:9f:53:ae:c6:fe Lease:0x646a3447}
	I0520 08:10:38.278460    2411 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:6:5b:64:76:26:39 ID:1,6:5b:64:76:26:39 Lease:0x646a3369}
	I0520 08:10:38.278465    2411 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:8a:69:5a:cb:74:51 ID:1,8a:69:5a:cb:74:51 Lease:0x6468e1dc}
	I0520 08:10:38.278471    2411 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:36:b1:60:c3:d2:35 ID:1,36:b1:60:c3:d2:35 Lease:0x646a3311}
	I0520 08:10:40.280558    2411 main.go:141] libmachine: Attempt 3
	I0520 08:10:40.280620    2411 main.go:141] libmachine: Searching for 32:58:8d:7:2e:d4 in /var/db/dhcpd_leases ...
	I0520 08:10:40.280739    2411 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0520 08:10:40.280769    2411 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:ea:9f:53:ae:c6:fe ID:1,ea:9f:53:ae:c6:fe Lease:0x646a3447}
	I0520 08:10:40.280775    2411 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:6:5b:64:76:26:39 ID:1,6:5b:64:76:26:39 Lease:0x646a3369}
	I0520 08:10:40.280781    2411 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:8a:69:5a:cb:74:51 ID:1,8a:69:5a:cb:74:51 Lease:0x6468e1dc}
	I0520 08:10:40.280787    2411 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:36:b1:60:c3:d2:35 ID:1,36:b1:60:c3:d2:35 Lease:0x646a3311}
	I0520 08:10:42.282895    2411 main.go:141] libmachine: Attempt 4
	I0520 08:10:42.282934    2411 main.go:141] libmachine: Searching for 32:58:8d:7:2e:d4 in /var/db/dhcpd_leases ...
	I0520 08:10:42.283030    2411 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0520 08:10:42.283037    2411 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:ea:9f:53:ae:c6:fe ID:1,ea:9f:53:ae:c6:fe Lease:0x646a3447}
	I0520 08:10:42.283045    2411 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:6:5b:64:76:26:39 ID:1,6:5b:64:76:26:39 Lease:0x646a3369}
	I0520 08:10:42.283051    2411 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:8a:69:5a:cb:74:51 ID:1,8a:69:5a:cb:74:51 Lease:0x6468e1dc}
	I0520 08:10:42.283056    2411 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:36:b1:60:c3:d2:35 ID:1,36:b1:60:c3:d2:35 Lease:0x646a3311}
	I0520 08:10:44.285127    2411 main.go:141] libmachine: Attempt 5
	I0520 08:10:44.285147    2411 main.go:141] libmachine: Searching for 32:58:8d:7:2e:d4 in /var/db/dhcpd_leases ...
	I0520 08:10:44.285213    2411 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0520 08:10:44.285223    2411 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:ea:9f:53:ae:c6:fe ID:1,ea:9f:53:ae:c6:fe Lease:0x646a3447}
	I0520 08:10:44.285229    2411 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:6:5b:64:76:26:39 ID:1,6:5b:64:76:26:39 Lease:0x646a3369}
	I0520 08:10:44.285234    2411 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:8a:69:5a:cb:74:51 ID:1,8a:69:5a:cb:74:51 Lease:0x6468e1dc}
	I0520 08:10:44.285253    2411 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:36:b1:60:c3:d2:35 ID:1,36:b1:60:c3:d2:35 Lease:0x646a3311}
	I0520 08:10:46.287321    2411 main.go:141] libmachine: Attempt 6
	I0520 08:10:46.287368    2411 main.go:141] libmachine: Searching for 32:58:8d:7:2e:d4 in /var/db/dhcpd_leases ...
	I0520 08:10:46.287488    2411 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0520 08:10:46.287501    2411 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:32:58:8d:7:2e:d4 ID:1,32:58:8d:7:2e:d4 Lease:0x646a3475}
	I0520 08:10:46.287507    2411 main.go:141] libmachine: Found match: 32:58:8d:7:2e:d4
	I0520 08:10:46.287522    2411 main.go:141] libmachine: IP: 192.168.105.6
	I0520 08:10:46.287529    2411 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.6)...
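The retry loop above polls /var/db/dhcpd_leases every two seconds until the VM's freshly generated MAC address appears; note the entry count jumps from 4 to 5 on attempt 6, when the new lease lands and the match succeeds. A minimal Go sketch of the same lookup, assuming each lease record lists an ip_address= line before its hw_address=1,<mac> line as in the entries logged above (the parser and helper names are illustrative, not minikube's actual code):

    // leasematch.go — sketch of matching a MAC against /var/db/dhcpd_leases.
    // Assumption: each record lists ip_address= before hw_address=1,<mac>,
    // as in the entries logged above; real records are brace-delimited blocks.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // findIPForMAC returns the IP bound to mac in the lease file, if any.
    func findIPForMAC(path, mac string) (string, bool) {
        f, err := os.Open(path)
        if err != nil {
            return "", false
        }
        defer f.Close()

        var ip string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            switch {
            case strings.HasPrefix(line, "ip_address="):
                ip = strings.TrimPrefix(line, "ip_address=") // remember for this record
            case line == "hw_address=1,"+mac:
                return ip, true // MAC matched the record whose IP we just saw
            }
        }
        return "", false
    }

    func main() {
        if ip, ok := findIPForMAC("/var/db/dhcpd_leases", "32:58:8d:7:2e:d4"); ok {
            fmt.Println("found IP:", ip) // 192.168.105.6 in the run above
        }
    }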
	I0520 08:10:48.308535    2411 machine.go:88] provisioning docker machine ...
	I0520 08:10:48.308617    2411 buildroot.go:166] provisioning hostname "ingress-addon-legacy-371000"
	I0520 08:10:48.308902    2411 main.go:141] libmachine: Using SSH client type: native
	I0520 08:10:48.309917    2411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104f086d0] 0x104f0b130 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0520 08:10:48.309944    2411 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-371000 && echo "ingress-addon-legacy-371000" | sudo tee /etc/hostname
	I0520 08:10:48.405979    2411 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-371000
	
	I0520 08:10:48.406105    2411 main.go:141] libmachine: Using SSH client type: native
	I0520 08:10:48.406599    2411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104f086d0] 0x104f0b130 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0520 08:10:48.406619    2411 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-371000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-371000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-371000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 08:10:48.484288    2411 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 08:10:48.484307    2411 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/16543-1012/.minikube CaCertPath:/Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16543-1012/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16543-1012/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16543-1012/.minikube}
	I0520 08:10:48.484319    2411 buildroot.go:174] setting up certificates
	I0520 08:10:48.484333    2411 provision.go:83] configureAuth start
	I0520 08:10:48.484342    2411 provision.go:138] copyHostCerts
	I0520 08:10:48.484434    2411 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/16543-1012/.minikube/ca.pem
	I0520 08:10:48.484519    2411 exec_runner.go:144] found /Users/jenkins/minikube-integration/16543-1012/.minikube/ca.pem, removing ...
	I0520 08:10:48.484527    2411 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16543-1012/.minikube/ca.pem
	I0520 08:10:48.484698    2411 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16543-1012/.minikube/ca.pem (1082 bytes)
	I0520 08:10:48.484934    2411 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/16543-1012/.minikube/cert.pem
	I0520 08:10:48.484979    2411 exec_runner.go:144] found /Users/jenkins/minikube-integration/16543-1012/.minikube/cert.pem, removing ...
	I0520 08:10:48.484983    2411 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16543-1012/.minikube/cert.pem
	I0520 08:10:48.485041    2411 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16543-1012/.minikube/cert.pem (1123 bytes)
	I0520 08:10:48.485161    2411 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/16543-1012/.minikube/key.pem
	I0520 08:10:48.485203    2411 exec_runner.go:144] found /Users/jenkins/minikube-integration/16543-1012/.minikube/key.pem, removing ...
	I0520 08:10:48.485212    2411 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16543-1012/.minikube/key.pem
	I0520 08:10:48.485276    2411 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16543-1012/.minikube/key.pem (1679 bytes)
	I0520 08:10:48.485427    2411 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-371000 san=[192.168.105.6 192.168.105.6 localhost 127.0.0.1 minikube ingress-addon-legacy-371000]
	I0520 08:10:48.594093    2411 provision.go:172] copyRemoteCerts
	I0520 08:10:48.594162    2411 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 08:10:48.594174    2411 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/ingress-addon-legacy-371000/id_rsa Username:docker}
	I0520 08:10:48.631273    2411 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0520 08:10:48.631326    2411 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0520 08:10:48.638421    2411 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0520 08:10:48.638462    2411 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0520 08:10:48.645382    2411 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0520 08:10:48.645426    2411 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0520 08:10:48.652317    2411 provision.go:86] duration metric: configureAuth took 167.976458ms
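The configureAuth step above issues a server certificate whose SAN list covers the VM IP, localhost, and the machine names, signed by the CA under .minikube/certs. A minimal crypto/x509 sketch of signing such a SAN-bearing server cert; the throwaway in-memory CA, key size, and validity window here are placeholders, since minikube loads its CA material from the ca.pem/ca-key.pem paths logged above:

    // servercert.go — sketch of issuing a server cert with the SANs from the
    // "generating server cert ... san=[...]" line above. Serial numbers,
    // key sizes, and validity are illustrative, not minikube's values.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // Throwaway CA; the real flow loads ca.pem / ca-key.pem from disk.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server cert with the SAN list mirroring the log line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.ingress-addon-legacy-371000"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            IPAddresses:  []net.IP{net.ParseIP("192.168.105.6"), net.ParseIP("127.0.0.1")},
            DNSNames:     []string{"localhost", "minikube", "ingress-addon-legacy-371000"},
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("issued server cert, %d DER bytes\n", len(srvDER))
    }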
	I0520 08:10:48.652324    2411 buildroot.go:189] setting minikube options for container-runtime
	I0520 08:10:48.652435    2411 config.go:182] Loaded profile config "ingress-addon-legacy-371000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0520 08:10:48.652468    2411 main.go:141] libmachine: Using SSH client type: native
	I0520 08:10:48.652682    2411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104f086d0] 0x104f0b130 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0520 08:10:48.652688    2411 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0520 08:10:48.715953    2411 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0520 08:10:48.715959    2411 buildroot.go:70] root file system type: tmpfs
	I0520 08:10:48.716034    2411 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0520 08:10:48.716084    2411 main.go:141] libmachine: Using SSH client type: native
	I0520 08:10:48.716345    2411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104f086d0] 0x104f0b130 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0520 08:10:48.716384    2411 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0520 08:10:48.783851    2411 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0520 08:10:48.783907    2411 main.go:141] libmachine: Using SSH client type: native
	I0520 08:10:48.784156    2411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104f086d0] 0x104f0b130 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0520 08:10:48.784165    2411 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0520 08:10:49.111969    2411 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0520 08:10:49.111992    2411 machine.go:91] provisioned docker machine in 803.434208ms
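The diff-or-swap command a few lines above makes the unit update idempotent: the rendered docker.service.new only replaces the live unit, and Docker is only re-enabled and restarted, when the content actually differs. On this first boot the diff fails because no unit exists yet, so the new file is installed and the symlink created. A minimal local Go sketch of the same write-if-changed pattern (paths and the reload step are placeholders; minikube runs the shell version over SSH):

    // unitsync.go — sketch of the write-if-changed idiom used above: render
    // the unit to a .new path, compare with what is live, and only swap and
    // reload when the content differs.
    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    func syncFile(path string, want []byte) (changed bool, err error) {
        have, err := os.ReadFile(path)
        if err == nil && bytes.Equal(have, want) {
            return false, nil // already up to date; skip the restart entirely
        }
        if err := os.WriteFile(path+".new", want, 0o644); err != nil {
            return false, err
        }
        // Atomic swap, matching the `sudo mv docker.service.new docker.service` above.
        if err := os.Rename(path+".new", path); err != nil {
            return false, err
        }
        return true, nil
    }

    func main() {
        unit := []byte("[Unit]\nDescription=example\n")
        changed, err := syncFile("/tmp/docker.service", unit)
        if err != nil {
            panic(err)
        }
        if changed {
            // In the log this is `systemctl daemon-reload && systemctl restart docker`.
            fmt.Println("unit updated; daemon-reload and restart would run here")
        }
    }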
	I0520 08:10:49.111998    2411 client.go:171] LocalClient.Create took 15.165872625s
	I0520 08:10:49.112016    2411 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-371000" took 15.165933084s
	I0520 08:10:49.112023    2411 start.go:300] post-start starting for "ingress-addon-legacy-371000" (driver="qemu2")
	I0520 08:10:49.112027    2411 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 08:10:49.112116    2411 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 08:10:49.112130    2411 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/ingress-addon-legacy-371000/id_rsa Username:docker}
	I0520 08:10:49.145798    2411 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 08:10:49.147535    2411 info.go:137] Remote host: Buildroot 2021.02.12
	I0520 08:10:49.147544    2411 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16543-1012/.minikube/addons for local assets ...
	I0520 08:10:49.147616    2411 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16543-1012/.minikube/files for local assets ...
	I0520 08:10:49.147726    2411 filesync.go:149] local asset: /Users/jenkins/minikube-integration/16543-1012/.minikube/files/etc/ssl/certs/14372.pem -> 14372.pem in /etc/ssl/certs
	I0520 08:10:49.147730    2411 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16543-1012/.minikube/files/etc/ssl/certs/14372.pem -> /etc/ssl/certs/14372.pem
	I0520 08:10:49.147848    2411 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 08:10:49.156490    2411 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16543-1012/.minikube/files/etc/ssl/certs/14372.pem --> /etc/ssl/certs/14372.pem (1708 bytes)
	I0520 08:10:49.165306    2411 start.go:303] post-start completed in 53.27375ms
	I0520 08:10:49.165795    2411 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/ingress-addon-legacy-371000/config.json ...
	I0520 08:10:49.165976    2411 start.go:128] duration metric: createHost completed in 15.245447209s
	I0520 08:10:49.166036    2411 main.go:141] libmachine: Using SSH client type: native
	I0520 08:10:49.166265    2411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104f086d0] 0x104f0b130 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0520 08:10:49.166270    2411 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 08:10:49.230624    2411 main.go:141] libmachine: SSH cmd err, output: <nil>: 1684595449.562308502
	
	I0520 08:10:49.230635    2411 fix.go:207] guest clock: 1684595449.562308502
	I0520 08:10:49.230639    2411 fix.go:220] Guest: 2023-05-20 08:10:49.562308502 -0700 PDT Remote: 2023-05-20 08:10:49.165981 -0700 PDT m=+28.029499876 (delta=396.327502ms)
	I0520 08:10:49.230651    2411 fix.go:191] guest clock delta is within tolerance: 396.327502ms
	I0520 08:10:49.230654    2411 start.go:83] releasing machines lock for "ingress-addon-legacy-371000", held for 15.310175292s
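The guest-clock check above runs date +%s.%N on the guest, parses the result, and compares it with the host clock; the 396ms delta is judged within tolerance, so no resync is attempted. A minimal Go sketch of that parse-and-compare, assuming a nine-digit %N fraction and a hypothetical 2s bound (minikube's actual tolerance may differ):

    // clockdelta.go — sketch of the guest-clock tolerance check logged above:
    // parse the guest's `date +%s.%N` output and compare with host time.
    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock turns "1684595449.562308502" into a time.Time.
    // Assumes the fractional part is the nine-digit output of %N.
    func parseGuestClock(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            nsec, err = strconv.ParseInt(parts[1], 10, 64)
            if err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseGuestClock("1684595449.562308502")
        if err != nil {
            panic(err)
        }
        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = 2 * time.Second // assumed bound, for illustration only
        fmt.Printf("delta=%v within tolerance: %v\n", delta, delta <= tolerance)
    }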
	I0520 08:10:49.230975    2411 ssh_runner.go:195] Run: cat /version.json
	I0520 08:10:49.230984    2411 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 08:10:49.230984    2411 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/ingress-addon-legacy-371000/id_rsa Username:docker}
	I0520 08:10:49.231006    2411 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/ingress-addon-legacy-371000/id_rsa Username:docker}
	I0520 08:10:49.263751    2411 ssh_runner.go:195] Run: systemctl --version
	I0520 08:10:49.265881    2411 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 08:10:49.308397    2411 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 08:10:49.308442    2411 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0520 08:10:49.311777    2411 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0520 08:10:49.317244    2411 cni.go:307] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 08:10:49.317253    2411 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0520 08:10:49.317330    2411 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0520 08:10:49.330197    2411 docker.go:633] Got preloaded images: 
	I0520 08:10:49.330207    2411 docker.go:639] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0520 08:10:49.330268    2411 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0520 08:10:49.333686    2411 ssh_runner.go:195] Run: which lz4
	I0520 08:10:49.334909    2411 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0520 08:10:49.335010    2411 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0520 08:10:49.336456    2411 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 08:10:49.336471    2411 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (459739018 bytes)
	I0520 08:10:51.036758    2411 docker.go:597] Took 1.701793 seconds to copy over tarball
	I0520 08:10:51.036827    2411 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0520 08:10:52.361716    2411 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.32486375s)
	I0520 08:10:52.361735    2411 ssh_runner.go:146] rm: /preloaded.tar.lz4
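The preload step above stats /preloaded.tar.lz4 on the guest first and only transfers the ~459 MB tarball on a miss, then unpacks it with tar -I lz4 into /var and deletes it. A minimal Go sketch of the stat-guard, run locally for illustration (minikube executes the `stat -c "%s %y"` over SSH and treats a non-zero exit as "not present"):

    // preloadcheck.go — sketch of the existence check above: only transfer
    // the preload tarball when the remote stat fails.
    package main

    import (
        "fmt"
        "os"
    )

    func needsTransfer(remotePath string) bool {
        // In the log this is a remote stat; a miss means the guest has no tarball yet.
        _, err := os.Stat(remotePath)
        return err != nil
    }

    func main() {
        if needsTransfer("/preloaded.tar.lz4") {
            fmt.Println("copying preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4")
            // scp, then `sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4`, then rm follow here.
        }
    }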
	I0520 08:10:52.387668    2411 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0520 08:10:52.393145    2411 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I0520 08:10:52.402602    2411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 08:10:52.476523    2411 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 08:10:53.844289    2411 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.367751208s)
	I0520 08:10:53.844318    2411 start.go:481] detecting cgroup driver to use...
	I0520 08:10:53.844401    2411 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 08:10:53.850060    2411 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0520 08:10:53.853921    2411 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0520 08:10:53.857165    2411 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0520 08:10:53.857200    2411 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0520 08:10:53.860116    2411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 08:10:53.862880    2411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0520 08:10:53.866211    2411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 08:10:53.869538    2411 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 08:10:53.872924    2411 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0520 08:10:53.875825    2411 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 08:10:53.878654    2411 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 08:10:53.881945    2411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 08:10:53.962475    2411 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0520 08:10:53.972588    2411 start.go:481] detecting cgroup driver to use...
	I0520 08:10:53.972646    2411 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0520 08:10:53.978836    2411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 08:10:53.984167    2411 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 08:10:53.990007    2411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 08:10:53.994729    2411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 08:10:53.999678    2411 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0520 08:10:54.037886    2411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 08:10:54.043252    2411 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 08:10:54.048707    2411 ssh_runner.go:195] Run: which cri-dockerd
	I0520 08:10:54.049910    2411 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0520 08:10:54.052742    2411 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0520 08:10:54.057611    2411 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0520 08:10:54.137535    2411 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0520 08:10:54.223451    2411 docker.go:532] configuring docker to use "cgroupfs" as cgroup driver...
	I0520 08:10:54.223466    2411 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0520 08:10:54.228557    2411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 08:10:54.308135    2411 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 08:10:55.458008    2411 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.149858584s)
	I0520 08:10:55.458085    2411 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0520 08:10:55.470559    2411 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0520 08:10:55.490043    2411 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 20.10.23 ...
	I0520 08:10:55.490186    2411 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0520 08:10:55.491571    2411 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 08:10:55.495134    2411 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0520 08:10:55.495181    2411 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0520 08:10:55.503827    2411 docker.go:633] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0520 08:10:55.503837    2411 docker.go:639] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0520 08:10:55.503883    2411 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0520 08:10:55.506693    2411 ssh_runner.go:195] Run: which lz4
	I0520 08:10:55.507788    2411 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0520 08:10:55.507887    2411 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0520 08:10:55.508963    2411 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 08:10:55.508974    2411 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (459739018 bytes)
	I0520 08:10:57.210795    2411 docker.go:597] Took 1.702968 seconds to copy over tarball
	I0520 08:10:57.210859    2411 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0520 08:10:58.515065    2411 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.304194625s)
	I0520 08:10:58.515077    2411 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0520 08:10:58.541632    2411 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0520 08:10:58.548168    2411 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I0520 08:10:58.555894    2411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 08:10:58.629686    2411 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 08:11:00.230001    2411 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.600301458s)
	I0520 08:11:00.230114    2411 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0520 08:11:00.239444    2411 docker.go:633] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0520 08:11:00.239452    2411 docker.go:639] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
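Note that the tarball extracted above ships images tagged k8s.gcr.io/*, while the check looks for registry.k8s.io/* names, so the preload is judged missing even though the images are present; k8s.gcr.io is the legacy registry name for the same Kubernetes images. A minimal Go sketch of the prefix normalization that would make the two lists comparable (the helper is hypothetical, not minikube's):

    // imagenorm.go — sketch of why the preload check above misses: the loaded
    // tags use the legacy k8s.gcr.io prefix while the expected list uses
    // registry.k8s.io. Normalizing before comparing lines the lists up.
    package main

    import (
        "fmt"
        "strings"
    )

    func normalize(img string) string {
        // k8s.gcr.io is the legacy name for the community registry.
        return strings.Replace(img, "k8s.gcr.io/", "registry.k8s.io/", 1)
    }

    func main() {
        have := []string{"k8s.gcr.io/kube-apiserver:v1.18.20", "k8s.gcr.io/pause:3.2"}
        want := "registry.k8s.io/kube-apiserver:v1.18.20"
        found := false
        for _, img := range have {
            if normalize(img) == want {
                found = true
            }
        }
        fmt.Println("preloaded after normalization:", found)
    }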
	I0520 08:11:00.239456    2411 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0520 08:11:00.261783    2411 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0520 08:11:00.261850    2411 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 08:11:00.264665    2411 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0520 08:11:00.264748    2411 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0520 08:11:00.264799    2411 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0520 08:11:00.265143    2411 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0520 08:11:00.265439    2411 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0520 08:11:00.265451    2411 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0520 08:11:00.269196    2411 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 08:11:00.270366    2411 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0520 08:11:00.271279    2411 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0520 08:11:00.271962    2411 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0520 08:11:00.274111    2411 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0520 08:11:00.274147    2411 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0520 08:11:00.274161    2411 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0520 08:11:00.275626    2411 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	W0520 08:11:01.422185    2411 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0520 08:11:01.422283    2411 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 08:11:01.430998    2411 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0520 08:11:01.431043    2411 docker.go:313] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 08:11:01.431085    2411 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 08:11:01.444196    2411 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	W0520 08:11:01.770640    2411 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0520 08:11:01.770752    2411 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0520 08:11:01.778987    2411 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I0520 08:11:01.779012    2411 docker.go:313] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0520 08:11:01.779058    2411 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	W0520 08:11:01.779612    2411 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0520 08:11:01.779689    2411 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0520 08:11:01.791912    2411 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0520 08:11:01.791931    2411 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I0520 08:11:01.791949    2411 docker.go:313] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0520 08:11:01.792002    2411 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0520 08:11:01.799667    2411 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	W0520 08:11:01.892138    2411 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I0520 08:11:01.892254    2411 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0520 08:11:01.899866    2411 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I0520 08:11:01.899890    2411 docker.go:313] Removing image: registry.k8s.io/coredns:1.6.7
	I0520 08:11:01.899941    2411 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	I0520 08:11:01.912619    2411 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	W0520 08:11:01.972306    2411 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I0520 08:11:01.972434    2411 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0520 08:11:01.981021    2411 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I0520 08:11:01.981049    2411 docker.go:313] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0520 08:11:01.981105    2411 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	I0520 08:11:01.988577    2411 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	W0520 08:11:02.083312    2411 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0520 08:11:02.083428    2411 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0520 08:11:02.091858    2411 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I0520 08:11:02.091882    2411 docker.go:313] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0520 08:11:02.091940    2411 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0520 08:11:02.099261    2411 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I0520 08:11:02.228312    2411 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0520 08:11:02.244303    2411 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I0520 08:11:02.244338    2411 docker.go:313] Removing image: registry.k8s.io/pause:3.2
	I0520 08:11:02.244423    2411 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I0520 08:11:02.256354    2411 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	W0520 08:11:02.428267    2411 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0520 08:11:02.428941    2411 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0520 08:11:02.458431    2411 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I0520 08:11:02.458497    2411 docker.go:313] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0520 08:11:02.458663    2411 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	I0520 08:11:02.476778    2411 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I0520 08:11:02.476840    2411 cache_images.go:92] LoadImages completed in 2.237381708s
	W0520 08:11:02.476977    2411 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5: no such file or directory
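After each image fails the daemon lookup and the in-runtime digest check above, it is removed and minikube falls back to per-image cache files under .minikube/cache/images; on this runner the first such file is absent, so the load is skipped with the warning above rather than aborting the start. A minimal Go sketch of that stat-then-warn fallback (the helper is hypothetical; the path is the one from the log):

    // cachecheck.go — sketch of the fallback that produced the X warning above:
    // look for the per-image cache file and warn-and-continue if it is missing.
    package main

    import (
        "fmt"
        "os"
    )

    func loadFromCache(path string) error {
        if _, err := os.Stat(path); err != nil {
            return fmt.Errorf("loading cached images: %w", err)
        }
        // Transferring and `docker load`-ing the tarball would happen here.
        return nil
    }

    func main() {
        p := "/Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5"
        if err := loadFromCache(p); err != nil {
            fmt.Println("W:", err) // warn and continue, as in the log above
        }
    }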
	I0520 08:11:02.477066    2411 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0520 08:11:02.495007    2411 cni.go:84] Creating CNI manager for ""
	I0520 08:11:02.495022    2411 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0520 08:11:02.495037    2411 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0520 08:11:02.495051    2411 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.6 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-371000 NodeName:ingress-addon-legacy-371000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0520 08:11:02.495192    2411 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.6
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-371000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.6
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.6"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 08:11:02.495253    2411 kubeadm.go:971] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-371000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-371000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
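The kubeadm config rendered above is a multi-document YAML; its KubeletConfiguration block pins the cgroupfs driver and the static pod path used in the rest of the run. A minimal sketch of decoding just that block with the gopkg.in/yaml.v3 module (only a few fields are modeled; this is not kubeadm's own loader):

    // kubeletcfg.go — sketch of reading the KubeletConfiguration document from
    // the kubeadm config above. Requires the gopkg.in/yaml.v3 module.
    package main

    import (
        "fmt"

        "gopkg.in/yaml.v3"
    )

    type kubeletConfig struct {
        Kind          string `yaml:"kind"`
        CgroupDriver  string `yaml:"cgroupDriver"`
        StaticPodPath string `yaml:"staticPodPath"`
        FailSwapOn    bool   `yaml:"failSwapOn"`
    }

    func main() {
        doc := `
    kind: KubeletConfiguration
    cgroupDriver: cgroupfs
    failSwapOn: false
    staticPodPath: /etc/kubernetes/manifests
    `
        var cfg kubeletConfig
        if err := yaml.Unmarshal([]byte(doc), &cfg); err != nil {
            panic(err)
        }
        fmt.Printf("%+v\n", cfg) // mirrors the values in the config dump above
    }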
	I0520 08:11:02.495352    2411 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0520 08:11:02.499958    2411 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 08:11:02.500007    2411 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 08:11:02.503638    2411 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (355 bytes)
	I0520 08:11:02.510211    2411 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0520 08:11:02.516056    2411 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2127 bytes)
	I0520 08:11:02.521568    2411 ssh_runner.go:195] Run: grep 192.168.105.6	control-plane.minikube.internal$ /etc/hosts
	I0520 08:11:02.522934    2411 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.6	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 08:11:02.526704    2411 certs.go:56] Setting up /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/ingress-addon-legacy-371000 for IP: 192.168.105.6
	I0520 08:11:02.526714    2411 certs.go:190] acquiring lock for shared ca certs: {Name:mk455286e32296d088c043e2094c607a8fa5e5c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 08:11:02.527063    2411 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/16543-1012/.minikube/ca.key
	I0520 08:11:02.527201    2411 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/16543-1012/.minikube/proxy-client-ca.key
	I0520 08:11:02.527231    2411 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/ingress-addon-legacy-371000/client.key
	I0520 08:11:02.527240    2411 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/ingress-addon-legacy-371000/client.crt with IP's: []
	I0520 08:11:02.604173    2411 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/ingress-addon-legacy-371000/client.crt ...
	I0520 08:11:02.604178    2411 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/ingress-addon-legacy-371000/client.crt: {Name:mk8dd7a78e89e2bdba945aba33cc1ed75d0932ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 08:11:02.604389    2411 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/ingress-addon-legacy-371000/client.key ...
	I0520 08:11:02.604392    2411 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/ingress-addon-legacy-371000/client.key: {Name:mkc501f7860c9c1c7f94ab1cb70c699ed8d75bd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 08:11:02.604512    2411 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/ingress-addon-legacy-371000/apiserver.key.b354f644
	I0520 08:11:02.604521    2411 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/ingress-addon-legacy-371000/apiserver.crt.b354f644 with IP's: [192.168.105.6 10.96.0.1 127.0.0.1 10.0.0.1]
	I0520 08:11:02.645144    2411 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/ingress-addon-legacy-371000/apiserver.crt.b354f644 ...
	I0520 08:11:02.645147    2411 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/ingress-addon-legacy-371000/apiserver.crt.b354f644: {Name:mka701a429665cf58f89e5041d12226f1400a541 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 08:11:02.645284    2411 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/ingress-addon-legacy-371000/apiserver.key.b354f644 ...
	I0520 08:11:02.645287    2411 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/ingress-addon-legacy-371000/apiserver.key.b354f644: {Name:mkeda105cc6d1e40a7b33fb1e38fecfd406815ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 08:11:02.645403    2411 certs.go:337] copying /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/ingress-addon-legacy-371000/apiserver.crt.b354f644 -> /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/ingress-addon-legacy-371000/apiserver.crt
	I0520 08:11:02.645503    2411 certs.go:341] copying /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/ingress-addon-legacy-371000/apiserver.key.b354f644 -> /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/ingress-addon-legacy-371000/apiserver.key
	I0520 08:11:02.645588    2411 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/ingress-addon-legacy-371000/proxy-client.key
	I0520 08:11:02.645594    2411 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/ingress-addon-legacy-371000/proxy-client.crt with IP's: []
	I0520 08:11:02.749129    2411 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/ingress-addon-legacy-371000/proxy-client.crt ...
	I0520 08:11:02.749134    2411 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/ingress-addon-legacy-371000/proxy-client.crt: {Name:mk047e841e500c96fdbfff75b1d017c05b59dbc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 08:11:02.749335    2411 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/ingress-addon-legacy-371000/proxy-client.key ...
	I0520 08:11:02.749339    2411 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/ingress-addon-legacy-371000/proxy-client.key: {Name:mkfb33f61c9a51adc4b42d80d765ee64effd78a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 08:11:02.749470    2411 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/ingress-addon-legacy-371000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0520 08:11:02.749486    2411 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/ingress-addon-legacy-371000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0520 08:11:02.749500    2411 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/ingress-addon-legacy-371000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0520 08:11:02.749518    2411 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/ingress-addon-legacy-371000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0520 08:11:02.749530    2411 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16543-1012/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0520 08:11:02.749542    2411 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16543-1012/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0520 08:11:02.749553    2411 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16543-1012/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0520 08:11:02.749565    2411 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16543-1012/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0520 08:11:02.749660    2411 certs.go:437] found cert: /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/Users/jenkins/minikube-integration/16543-1012/.minikube/certs/1437.pem (1338 bytes)
	W0520 08:11:02.750078    2411 certs.go:433] ignoring /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/Users/jenkins/minikube-integration/16543-1012/.minikube/certs/1437_empty.pem, impossibly tiny 0 bytes
	I0520 08:11:02.750090    2411 certs.go:437] found cert: /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 08:11:02.750117    2411 certs.go:437] found cert: /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca.pem (1082 bytes)
	I0520 08:11:02.750143    2411 certs.go:437] found cert: /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/Users/jenkins/minikube-integration/16543-1012/.minikube/certs/cert.pem (1123 bytes)
	I0520 08:11:02.750171    2411 certs.go:437] found cert: /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/Users/jenkins/minikube-integration/16543-1012/.minikube/certs/key.pem (1679 bytes)
	I0520 08:11:02.750235    2411 certs.go:437] found cert: /Users/jenkins/minikube-integration/16543-1012/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/16543-1012/.minikube/files/etc/ssl/certs/14372.pem (1708 bytes)
	I0520 08:11:02.750259    2411 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16543-1012/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0520 08:11:02.750272    2411 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/1437.pem -> /usr/share/ca-certificates/1437.pem
	I0520 08:11:02.750283    2411 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16543-1012/.minikube/files/etc/ssl/certs/14372.pem -> /usr/share/ca-certificates/14372.pem
	I0520 08:11:02.750696    2411 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/ingress-addon-legacy-371000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0520 08:11:02.758429    2411 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/ingress-addon-legacy-371000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 08:11:02.765765    2411 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/ingress-addon-legacy-371000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 08:11:02.773044    2411 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/ingress-addon-legacy-371000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 08:11:02.779780    2411 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16543-1012/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 08:11:02.786504    2411 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16543-1012/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0520 08:11:02.793986    2411 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16543-1012/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 08:11:02.801292    2411 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16543-1012/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0520 08:11:02.808115    2411 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16543-1012/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 08:11:02.814807    2411 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/1437.pem --> /usr/share/ca-certificates/1437.pem (1338 bytes)
	I0520 08:11:02.821964    2411 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16543-1012/.minikube/files/etc/ssl/certs/14372.pem --> /usr/share/ca-certificates/14372.pem (1708 bytes)
	I0520 08:11:02.829118    2411 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
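The scp lines above are minikube's ssh_runner pushing certificates into the guest; "scp memory" means the payload (here the generated kubeconfig) is streamed from an in-memory asset rather than a file on disk. A minimal hand-rolled sketch of the same transfer for one cert, not minikube's actual transport, using the SSH key path, guest user, and IP that appear later in this log:

	KEY=/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/ingress-addon-legacy-371000/id_rsa
	# copy the CA cert into the guest, then move it into place as root
	scp -i "$KEY" /Users/jenkins/minikube-integration/16543-1012/.minikube/ca.crt docker@192.168.105.6:/tmp/ca.crt
	ssh -i "$KEY" docker@192.168.105.6 'sudo mv /tmp/ca.crt /var/lib/minikube/certs/ca.crt'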
	I0520 08:11:02.834259    2411 ssh_runner.go:195] Run: openssl version
	I0520 08:11:02.836469    2411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 08:11:02.839406    2411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 08:11:02.840940    2411 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 May 20 15:04 /usr/share/ca-certificates/minikubeCA.pem
	I0520 08:11:02.840963    2411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 08:11:02.842827    2411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 08:11:02.846247    2411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1437.pem && ln -fs /usr/share/ca-certificates/1437.pem /etc/ssl/certs/1437.pem"
	I0520 08:11:02.849704    2411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1437.pem
	I0520 08:11:02.851191    2411 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 May 20 15:06 /usr/share/ca-certificates/1437.pem
	I0520 08:11:02.851211    2411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1437.pem
	I0520 08:11:02.852991    2411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1437.pem /etc/ssl/certs/51391683.0"
	I0520 08:11:02.856025    2411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14372.pem && ln -fs /usr/share/ca-certificates/14372.pem /etc/ssl/certs/14372.pem"
	I0520 08:11:02.858945    2411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14372.pem
	I0520 08:11:02.860574    2411 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 May 20 15:06 /usr/share/ca-certificates/14372.pem
	I0520 08:11:02.860592    2411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14372.pem
	I0520 08:11:02.862414    2411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14372.pem /etc/ssl/certs/3ec20f2e.0"
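The test/ls/openssl/ln sequence above implements OpenSSL's hashed-directory CA lookup: each certificate installed under /etc/ssl/certs must also be reachable as <subject-hash>.0. A minimal sketch of one such link, with the paths from this run:

	# compute the subject hash OpenSSL uses for CA lookup, then link the cert under it
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # b5213941 for this CA
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"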
	I0520 08:11:02.865697    2411 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0520 08:11:02.867168    2411 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0520 08:11:02.867204    2411 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-371000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-371000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.6 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0520 08:11:02.867274    2411 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0520 08:11:02.876543    2411 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0520 08:11:02.879359    2411 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 08:11:02.882313    2411 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 08:11:02.885536    2411 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 08:11:02.885548    2411 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0520 08:11:02.909698    2411 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0520 08:11:02.909779    2411 kubeadm.go:322] [preflight] Running pre-flight checks
	I0520 08:11:02.997429    2411 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 08:11:02.997489    2411 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 08:11:02.997561    2411 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 08:11:03.057593    2411 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 08:11:03.057657    2411 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 08:11:03.057677    2411 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0520 08:11:03.145444    2411 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 08:11:03.155691    2411 out.go:204]   - Generating certificates and keys ...
	I0520 08:11:03.155739    2411 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0520 08:11:03.155824    2411 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0520 08:11:03.214610    2411 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0520 08:11:03.288317    2411 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0520 08:11:03.398062    2411 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0520 08:11:03.635292    2411 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0520 08:11:03.787538    2411 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0520 08:11:03.787607    2411 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-371000 localhost] and IPs [192.168.105.6 127.0.0.1 ::1]
	I0520 08:11:03.880141    2411 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0520 08:11:03.880220    2411 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-371000 localhost] and IPs [192.168.105.6 127.0.0.1 ::1]
	I0520 08:11:04.023190    2411 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0520 08:11:04.125009    2411 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0520 08:11:04.251782    2411 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0520 08:11:04.251838    2411 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 08:11:04.384502    2411 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 08:11:04.482130    2411 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 08:11:04.747562    2411 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 08:11:05.036853    2411 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 08:11:05.037097    2411 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 08:11:05.041288    2411 out.go:204]   - Booting up control plane ...
	I0520 08:11:05.041361    2411 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 08:11:05.041401    2411 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 08:11:05.041522    2411 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 08:11:05.042057    2411 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 08:11:05.043454    2411 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0520 08:11:16.049727    2411 kubeadm.go:322] [apiclient] All control plane components are healthy after 11.005790 seconds
	I0520 08:11:16.049951    2411 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0520 08:11:16.066934    2411 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0520 08:11:16.600085    2411 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0520 08:11:16.600356    2411 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-371000 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0520 08:11:17.110192    2411 kubeadm.go:322] [bootstrap-token] Using token: 04qynp.g86fdp8nouvs9odp
	I0520 08:11:17.113176    2411 out.go:204]   - Configuring RBAC rules ...
	I0520 08:11:17.113239    2411 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0520 08:11:17.114554    2411 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0520 08:11:17.119942    2411 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0520 08:11:17.122301    2411 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0520 08:11:17.124106    2411 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0520 08:11:17.124941    2411 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0520 08:11:17.130609    2411 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0520 08:11:17.317150    2411 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0520 08:11:17.517272    2411 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0520 08:11:17.517905    2411 kubeadm.go:322] 
	I0520 08:11:17.517944    2411 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0520 08:11:17.517947    2411 kubeadm.go:322] 
	I0520 08:11:17.517993    2411 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0520 08:11:17.518000    2411 kubeadm.go:322] 
	I0520 08:11:17.518014    2411 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0520 08:11:17.518056    2411 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0520 08:11:17.518090    2411 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0520 08:11:17.518093    2411 kubeadm.go:322] 
	I0520 08:11:17.518145    2411 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0520 08:11:17.518202    2411 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0520 08:11:17.518261    2411 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0520 08:11:17.518265    2411 kubeadm.go:322] 
	I0520 08:11:17.518321    2411 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0520 08:11:17.518375    2411 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0520 08:11:17.518380    2411 kubeadm.go:322] 
	I0520 08:11:17.518426    2411 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 04qynp.g86fdp8nouvs9odp \
	I0520 08:11:17.518497    2411 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:c637a44edb20ebaef1a3cd8bd36bb27010137a6bac525a779e19218d8d4ae1e6 \
	I0520 08:11:17.518515    2411 kubeadm.go:322]     --control-plane 
	I0520 08:11:17.518524    2411 kubeadm.go:322] 
	I0520 08:11:17.518588    2411 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0520 08:11:17.518601    2411 kubeadm.go:322] 
	I0520 08:11:17.518656    2411 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 04qynp.g86fdp8nouvs9odp \
	I0520 08:11:17.518732    2411 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:c637a44edb20ebaef1a3cd8bd36bb27010137a6bac525a779e19218d8d4ae1e6 
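The --discovery-token-ca-cert-hash printed in the join commands above can be recomputed on the control plane at any time. A sketch using the standard kubeadm derivation (sha256 over the DER-encoded CA public key), with the CA at the path this cluster uses:

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'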
	I0520 08:11:17.518903    2411 kubeadm.go:322] W0520 15:11:03.241232    1564 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0520 08:11:17.519010    2411 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0520 08:11:17.519104    2411 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 19.03
	I0520 08:11:17.519186    2411 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 08:11:17.519269    2411 kubeadm.go:322] W0520 15:11:05.372590    1564 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0520 08:11:17.519348    2411 kubeadm.go:322] W0520 15:11:05.373275    1564 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0520 08:11:17.519358    2411 cni.go:84] Creating CNI manager for ""
	I0520 08:11:17.519369    2411 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0520 08:11:17.519381    2411 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 08:11:17.519503    2411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 08:11:17.519504    2411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=24686ce6bbd657e092eb3c3fd6be64c1b7241033 minikube.k8s.io/name=ingress-addon-legacy-371000 minikube.k8s.io/updated_at=2023_05_20T08_11_17_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 08:11:17.589038    2411 ops.go:34] apiserver oom_adj: -16
	I0520 08:11:17.589072    2411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 08:11:18.132923    2411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 08:11:18.632929    2411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 08:11:19.133163    2411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 08:11:19.633171    2411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 08:11:20.133111    2411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 08:11:20.633052    2411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 08:11:21.133173    2411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 08:11:21.633073    2411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 08:11:22.133076    2411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 08:11:22.633019    2411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 08:11:23.133155    2411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 08:11:23.633119    2411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 08:11:24.132942    2411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 08:11:24.632952    2411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 08:11:25.133066    2411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 08:11:25.633159    2411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 08:11:26.132909    2411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 08:11:26.633113    2411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 08:11:27.133056    2411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 08:11:27.632876    2411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 08:11:28.133099    2411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 08:11:28.633148    2411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 08:11:29.133048    2411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 08:11:29.633127    2411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 08:11:30.133150    2411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 08:11:30.633143    2411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 08:11:31.132810    2411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 08:11:31.633090    2411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 08:11:32.132915    2411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 08:11:32.632875    2411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 08:11:33.131174    2411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 08:11:33.190258    2411 kubeadm.go:1076] duration metric: took 15.670858167s to wait for elevateKubeSystemPrivileges.
	I0520 08:11:33.190273    2411 kubeadm.go:406] StartCluster complete in 30.323128625s
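The run of identical `kubectl get sa default` commands above is a 500 ms poll: the minikube-rbac clusterrolebinding created at 08:11:17 only becomes useful once the controller-manager has created the default ServiceAccount, so minikube retries until that succeeds (about 15.7 s here). A minimal sketch of the same wait:

	# retry until the default ServiceAccount exists, as the loop above does
	until sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done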
	I0520 08:11:33.190287    2411 settings.go:142] acquiring lock: {Name:mk59154b06c6365bdac4601706e783ce490a045a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 08:11:33.190378    2411 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/16543-1012/kubeconfig
	I0520 08:11:33.191476    2411 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16543-1012/kubeconfig: {Name:mkf6fa7fb711448995f7c2c1a6e60e631893d6a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 08:11:33.191666    2411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0520 08:11:33.191755    2411 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0520 08:11:33.191821    2411 addons.go:66] Setting storage-provisioner=true in profile "ingress-addon-legacy-371000"
	I0520 08:11:33.191829    2411 addons.go:228] Setting addon storage-provisioner=true in "ingress-addon-legacy-371000"
	I0520 08:11:33.191851    2411 addons.go:66] Setting default-storageclass=true in profile "ingress-addon-legacy-371000"
	I0520 08:11:33.191855    2411 host.go:66] Checking if "ingress-addon-legacy-371000" exists ...
	I0520 08:11:33.191863    2411 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-371000"
	I0520 08:11:33.191944    2411 config.go:182] Loaded profile config "ingress-addon-legacy-371000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0520 08:11:33.191946    2411 kapi.go:59] client config for ingress-addon-legacy-371000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/ingress-addon-legacy-371000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/ingress-addon-legacy-371000/client.key", CAFile:"/Users/jenkins/minikube-integration/16543-1012/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105f568e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0520 08:11:33.192340    2411 cert_rotation.go:137] Starting client certificate rotation controller
	I0520 08:11:33.192988    2411 kapi.go:59] client config for ingress-addon-legacy-371000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/ingress-addon-legacy-371000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/ingress-addon-legacy-371000/client.key", CAFile:"/Users/jenkins/minikube-integration/16543-1012/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105f568e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0520 08:11:33.197536    2411 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 08:11:33.200539    2411 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 08:11:33.200546    2411 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 08:11:33.200554    2411 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/ingress-addon-legacy-371000/id_rsa Username:docker}
	I0520 08:11:33.204304    2411 addons.go:228] Setting addon default-storageclass=true in "ingress-addon-legacy-371000"
	I0520 08:11:33.204321    2411 host.go:66] Checking if "ingress-addon-legacy-371000" exists ...
	I0520 08:11:33.205019    2411 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 08:11:33.205026    2411 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 08:11:33.205031    2411 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/ingress-addon-legacy-371000/id_rsa Username:docker}
	I0520 08:11:33.255085    2411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0520 08:11:33.290476    2411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 08:11:33.297516    2411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 08:11:33.458840    2411 start.go:916] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
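The sed pipeline at 08:11:33.255 rewrites the coredns ConfigMap in place so that pods can resolve host.minikube.internal to the host machine. After the replace, the Corefile contains a block like this (reconstructed from the sed expression above):

	hosts {
	   192.168.105.1 host.minikube.internal
	   fallthrough
	}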
	I0520 08:11:33.528330    2411 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0520 08:11:33.536415    2411 addons.go:499] enable addons completed in 344.70975ms: enabled=[default-storageclass storage-provisioner]
	I0520 08:11:33.710163    2411 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-371000" context rescaled to 1 replicas
	I0520 08:11:33.710182    2411 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.6 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 08:11:33.714304    2411 out.go:177] * Verifying Kubernetes components...
	I0520 08:11:33.718208    2411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 08:11:33.723624    2411 kapi.go:59] client config for ingress-addon-legacy-371000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/ingress-addon-legacy-371000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/ingress-addon-legacy-371000/client.key", CAFile:"/Users/jenkins/minikube-integration/16543-1012/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105f568e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0520 08:11:33.723762    2411 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-371000" to be "Ready" ...
	I0520 08:11:33.725522    2411 node_ready.go:49] node "ingress-addon-legacy-371000" has status "Ready":"True"
	I0520 08:11:33.725528    2411 node_ready.go:38] duration metric: took 1.755958ms waiting for node "ingress-addon-legacy-371000" to be "Ready" ...
	I0520 08:11:33.725531    2411 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 08:11:33.728878    2411 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-8756c" in "kube-system" namespace to be "Ready" ...
	I0520 08:11:35.744575    2411 pod_ready.go:102] pod "coredns-66bff467f8-8756c" in "kube-system" namespace has status "Ready":"False"
	I0520 08:11:37.745816    2411 pod_ready.go:102] pod "coredns-66bff467f8-8756c" in "kube-system" namespace has status "Ready":"False"
	I0520 08:11:39.747210    2411 pod_ready.go:102] pod "coredns-66bff467f8-8756c" in "kube-system" namespace has status "Ready":"False"
	I0520 08:11:41.245329    2411 pod_ready.go:92] pod "coredns-66bff467f8-8756c" in "kube-system" namespace has status "Ready":"True"
	I0520 08:11:41.245364    2411 pod_ready.go:81] duration metric: took 7.516485667s waiting for pod "coredns-66bff467f8-8756c" in "kube-system" namespace to be "Ready" ...
	I0520 08:11:41.245381    2411 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-371000" in "kube-system" namespace to be "Ready" ...
	I0520 08:11:41.252402    2411 pod_ready.go:92] pod "etcd-ingress-addon-legacy-371000" in "kube-system" namespace has status "Ready":"True"
	I0520 08:11:41.252420    2411 pod_ready.go:81] duration metric: took 7.028875ms waiting for pod "etcd-ingress-addon-legacy-371000" in "kube-system" namespace to be "Ready" ...
	I0520 08:11:41.252432    2411 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-371000" in "kube-system" namespace to be "Ready" ...
	I0520 08:11:41.258615    2411 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-371000" in "kube-system" namespace has status "Ready":"True"
	I0520 08:11:41.258630    2411 pod_ready.go:81] duration metric: took 6.189833ms waiting for pod "kube-apiserver-ingress-addon-legacy-371000" in "kube-system" namespace to be "Ready" ...
	I0520 08:11:41.258641    2411 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-371000" in "kube-system" namespace to be "Ready" ...
	I0520 08:11:41.266365    2411 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-371000" in "kube-system" namespace has status "Ready":"True"
	I0520 08:11:41.266390    2411 pod_ready.go:81] duration metric: took 7.737792ms waiting for pod "kube-controller-manager-ingress-addon-legacy-371000" in "kube-system" namespace to be "Ready" ...
	I0520 08:11:41.266402    2411 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-k65p6" in "kube-system" namespace to be "Ready" ...
	I0520 08:11:41.270569    2411 pod_ready.go:92] pod "kube-proxy-k65p6" in "kube-system" namespace has status "Ready":"True"
	I0520 08:11:41.270584    2411 pod_ready.go:81] duration metric: took 4.169875ms waiting for pod "kube-proxy-k65p6" in "kube-system" namespace to be "Ready" ...
	I0520 08:11:41.270592    2411 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-371000" in "kube-system" namespace to be "Ready" ...
	I0520 08:11:41.434915    2411 request.go:628] Waited for 164.226083ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-371000
	I0520 08:11:41.633613    2411 request.go:628] Waited for 192.744875ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/nodes/ingress-addon-legacy-371000
	I0520 08:11:41.640160    2411 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-371000" in "kube-system" namespace has status "Ready":"True"
	I0520 08:11:41.640188    2411 pod_ready.go:81] duration metric: took 369.586083ms waiting for pod "kube-scheduler-ingress-addon-legacy-371000" in "kube-system" namespace to be "Ready" ...
	I0520 08:11:41.640219    2411 pod_ready.go:38] duration metric: took 7.914687375s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
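Each pod_ready wait above is observing the pod's Ready condition in its status. The same check by hand for one of these pods, as a sketch:

	# prints "True" once the pod passes its readiness checks
	kubectl -n kube-system get pod coredns-66bff467f8-8756c \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'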
	I0520 08:11:41.640276    2411 api_server.go:52] waiting for apiserver process to appear ...
	I0520 08:11:41.640589    2411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 08:11:41.658214    2411 api_server.go:72] duration metric: took 7.94802075s to wait for apiserver process to appear ...
	I0520 08:11:41.658237    2411 api_server.go:88] waiting for apiserver healthz status ...
	I0520 08:11:41.658266    2411 api_server.go:253] Checking apiserver healthz at https://192.168.105.6:8443/healthz ...
	I0520 08:11:41.667805    2411 api_server.go:279] https://192.168.105.6:8443/healthz returned 200:
	ok
	I0520 08:11:41.668865    2411 api_server.go:141] control plane version: v1.18.20
	I0520 08:11:41.668889    2411 api_server.go:131] duration metric: took 10.63925ms to wait for apiserver health ...
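The healthz probe above is a plain HTTPS GET against the apiserver. A sketch of the same request with curl, using the client cert pair and CA from the kapi client config logged earlier (the expected body is the "ok" shown above):

	curl --cacert /Users/jenkins/minikube-integration/16543-1012/.minikube/ca.crt \
	     --cert /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/ingress-addon-legacy-371000/client.crt \
	     --key /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/ingress-addon-legacy-371000/client.key \
	     https://192.168.105.6:8443/healthz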
	I0520 08:11:41.668918    2411 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 08:11:41.834909    2411 request.go:628] Waited for 165.891833ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods
	I0520 08:11:41.849224    2411 system_pods.go:59] 7 kube-system pods found
	I0520 08:11:41.849256    2411 system_pods.go:61] "coredns-66bff467f8-8756c" [2eec04c4-2fdc-428f-bdab-dfb247718b98] Running
	I0520 08:11:41.849267    2411 system_pods.go:61] "etcd-ingress-addon-legacy-371000" [9b88ca73-b486-4840-ba4c-571e34472372] Running
	I0520 08:11:41.849292    2411 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-371000" [afde7e53-c7bc-4439-a6c3-7492db9e75dc] Running
	I0520 08:11:41.849306    2411 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-371000" [832a399e-3dab-40e4-8659-03766dfb9bb9] Running
	I0520 08:11:41.849319    2411 system_pods.go:61] "kube-proxy-k65p6" [a4f06e0c-ed08-454b-a71f-e508e9f907f9] Running
	I0520 08:11:41.849332    2411 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-371000" [6457eab4-e622-4b7e-94b7-05acf4fe5a02] Running
	I0520 08:11:41.849344    2411 system_pods.go:61] "storage-provisioner" [b44b6fe0-3a04-41f7-98bd-dcdc7a2ace82] Running
	I0520 08:11:41.849353    2411 system_pods.go:74] duration metric: took 180.426458ms to wait for pod list to return data ...
	I0520 08:11:41.849376    2411 default_sa.go:34] waiting for default service account to be created ...
	I0520 08:11:42.034900    2411 request.go:628] Waited for 185.38825ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/default/serviceaccounts
	I0520 08:11:42.040087    2411 default_sa.go:45] found service account: "default"
	I0520 08:11:42.040112    2411 default_sa.go:55] duration metric: took 190.723917ms for default service account to be created ...
	I0520 08:11:42.040125    2411 system_pods.go:116] waiting for k8s-apps to be running ...
	I0520 08:11:42.234835    2411 request.go:628] Waited for 194.58975ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods
	I0520 08:11:42.248572    2411 system_pods.go:86] 7 kube-system pods found
	I0520 08:11:42.248610    2411 system_pods.go:89] "coredns-66bff467f8-8756c" [2eec04c4-2fdc-428f-bdab-dfb247718b98] Running
	I0520 08:11:42.248622    2411 system_pods.go:89] "etcd-ingress-addon-legacy-371000" [9b88ca73-b486-4840-ba4c-571e34472372] Running
	I0520 08:11:42.248632    2411 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-371000" [afde7e53-c7bc-4439-a6c3-7492db9e75dc] Running
	I0520 08:11:42.248646    2411 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-371000" [832a399e-3dab-40e4-8659-03766dfb9bb9] Running
	I0520 08:11:42.248661    2411 system_pods.go:89] "kube-proxy-k65p6" [a4f06e0c-ed08-454b-a71f-e508e9f907f9] Running
	I0520 08:11:42.248671    2411 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-371000" [6457eab4-e622-4b7e-94b7-05acf4fe5a02] Running
	I0520 08:11:42.248682    2411 system_pods.go:89] "storage-provisioner" [b44b6fe0-3a04-41f7-98bd-dcdc7a2ace82] Running
	I0520 08:11:42.248696    2411 system_pods.go:126] duration metric: took 208.562708ms to wait for k8s-apps to be running ...
	I0520 08:11:42.248718    2411 system_svc.go:44] waiting for kubelet service to be running ....
	I0520 08:11:42.248943    2411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 08:11:42.267945    2411 system_svc.go:56] duration metric: took 19.226041ms WaitForService to wait for kubelet.
	I0520 08:11:42.267968    2411 kubeadm.go:581] duration metric: took 8.557787583s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0520 08:11:42.267991    2411 node_conditions.go:102] verifying NodePressure condition ...
	I0520 08:11:42.434945    2411 request.go:628] Waited for 166.804541ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/nodes
	I0520 08:11:42.443380    2411 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0520 08:11:42.443430    2411 node_conditions.go:123] node cpu capacity is 2
	I0520 08:11:42.443464    2411 node_conditions.go:105] duration metric: took 175.455542ms to run NodePressure ...
	I0520 08:11:42.443489    2411 start.go:228] waiting for startup goroutines ...
	I0520 08:11:42.443506    2411 start.go:233] waiting for cluster config update ...
	I0520 08:11:42.443531    2411 start.go:242] writing updated cluster config ...
	I0520 08:11:42.445027    2411 ssh_runner.go:195] Run: rm -f paused
	I0520 08:11:42.566579    2411 start.go:568] kubectl: 1.25.9, cluster: 1.18.20 (minor skew: 7)
	I0520 08:11:42.570946    2411 out.go:177] 
	W0520 08:11:42.574949    2411 out.go:239] ! /usr/local/bin/kubectl is version 1.25.9, which may have incompatibilities with Kubernetes 1.18.20.
	I0520 08:11:42.578852    2411 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0520 08:11:42.589825    2411 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-371000" cluster and "default" namespace by default
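Given the 7-minor-version skew flagged above, the hint is to use the version-matched kubectl that minikube manages rather than the host binary. A sketch against this run's profile (the first invocation downloads the matching binary if needed):

	minikube -p ingress-addon-legacy-371000 kubectl -- get pods -A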
	
	* 
	* ==> Docker <==
	* -- Journal begins at Sat 2023-05-20 15:10:45 UTC, ends at Sat 2023-05-20 15:12:51 UTC. --
	May 20 15:12:27 ingress-addon-legacy-371000 dockerd[1242]: time="2023-05-20T15:12:27.837693850Z" level=info msg="shim disconnected" id=d4465ecac2ad8d7a550a1ac69ad6a5ee312b4c46ea721a22863c58bae413e8ef namespace=moby
	May 20 15:12:27 ingress-addon-legacy-371000 dockerd[1242]: time="2023-05-20T15:12:27.837736518Z" level=warning msg="cleaning up after shim disconnected" id=d4465ecac2ad8d7a550a1ac69ad6a5ee312b4c46ea721a22863c58bae413e8ef namespace=moby
	May 20 15:12:27 ingress-addon-legacy-371000 dockerd[1242]: time="2023-05-20T15:12:27.837741310Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 15:12:42 ingress-addon-legacy-371000 dockerd[1235]: time="2023-05-20T15:12:42.072259440Z" level=info msg="ignoring event" container=42da86a655e9a2337707ceabcb139f35c6644f49a543783da285e2fbebae7b45 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 15:12:42 ingress-addon-legacy-371000 dockerd[1242]: time="2023-05-20T15:12:42.072624409Z" level=info msg="shim disconnected" id=42da86a655e9a2337707ceabcb139f35c6644f49a543783da285e2fbebae7b45 namespace=moby
	May 20 15:12:42 ingress-addon-legacy-371000 dockerd[1242]: time="2023-05-20T15:12:42.072659993Z" level=warning msg="cleaning up after shim disconnected" id=42da86a655e9a2337707ceabcb139f35c6644f49a543783da285e2fbebae7b45 namespace=moby
	May 20 15:12:42 ingress-addon-legacy-371000 dockerd[1242]: time="2023-05-20T15:12:42.072667077Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 15:12:43 ingress-addon-legacy-371000 dockerd[1242]: time="2023-05-20T15:12:43.070451465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 15:12:43 ingress-addon-legacy-371000 dockerd[1242]: time="2023-05-20T15:12:43.070704014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 15:12:43 ingress-addon-legacy-371000 dockerd[1242]: time="2023-05-20T15:12:43.070720098Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 15:12:43 ingress-addon-legacy-371000 dockerd[1242]: time="2023-05-20T15:12:43.070789100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 15:12:43 ingress-addon-legacy-371000 dockerd[1235]: time="2023-05-20T15:12:43.106675470Z" level=info msg="ignoring event" container=853f3c51575ddd256f2d93e9cb8634eebdb3a070394884ae831d44e93d7f6b32 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 15:12:43 ingress-addon-legacy-371000 dockerd[1242]: time="2023-05-20T15:12:43.106851933Z" level=info msg="shim disconnected" id=853f3c51575ddd256f2d93e9cb8634eebdb3a070394884ae831d44e93d7f6b32 namespace=moby
	May 20 15:12:43 ingress-addon-legacy-371000 dockerd[1242]: time="2023-05-20T15:12:43.106876558Z" level=warning msg="cleaning up after shim disconnected" id=853f3c51575ddd256f2d93e9cb8634eebdb3a070394884ae831d44e93d7f6b32 namespace=moby
	May 20 15:12:43 ingress-addon-legacy-371000 dockerd[1242]: time="2023-05-20T15:12:43.106880725Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 15:12:46 ingress-addon-legacy-371000 dockerd[1235]: time="2023-05-20T15:12:46.508519925Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=7fc9fbce956df23726b84c6e920ce88c2b7bbc4296d0e78a92aeb2349e6cef38
	May 20 15:12:46 ingress-addon-legacy-371000 dockerd[1235]: time="2023-05-20T15:12:46.535662897Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=7fc9fbce956df23726b84c6e920ce88c2b7bbc4296d0e78a92aeb2349e6cef38
	May 20 15:12:46 ingress-addon-legacy-371000 dockerd[1242]: time="2023-05-20T15:12:46.606028228Z" level=info msg="shim disconnected" id=7fc9fbce956df23726b84c6e920ce88c2b7bbc4296d0e78a92aeb2349e6cef38 namespace=moby
	May 20 15:12:46 ingress-addon-legacy-371000 dockerd[1242]: time="2023-05-20T15:12:46.606113147Z" level=warning msg="cleaning up after shim disconnected" id=7fc9fbce956df23726b84c6e920ce88c2b7bbc4296d0e78a92aeb2349e6cef38 namespace=moby
	May 20 15:12:46 ingress-addon-legacy-371000 dockerd[1242]: time="2023-05-20T15:12:46.606122230Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 15:12:46 ingress-addon-legacy-371000 dockerd[1235]: time="2023-05-20T15:12:46.606503699Z" level=info msg="ignoring event" container=7fc9fbce956df23726b84c6e920ce88c2b7bbc4296d0e78a92aeb2349e6cef38 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 15:12:46 ingress-addon-legacy-371000 dockerd[1242]: time="2023-05-20T15:12:46.645107767Z" level=info msg="shim disconnected" id=28f0cf39cfbb68e1cdb2f2013b106a2940c324cf6d0d8d0a15feacabf121b119 namespace=moby
	May 20 15:12:46 ingress-addon-legacy-371000 dockerd[1242]: time="2023-05-20T15:12:46.645138935Z" level=warning msg="cleaning up after shim disconnected" id=28f0cf39cfbb68e1cdb2f2013b106a2940c324cf6d0d8d0a15feacabf121b119 namespace=moby
	May 20 15:12:46 ingress-addon-legacy-371000 dockerd[1242]: time="2023-05-20T15:12:46.645143560Z" level=info msg="cleaning up dead shim" namespace=moby
	May 20 15:12:46 ingress-addon-legacy-371000 dockerd[1235]: time="2023-05-20T15:12:46.645219437Z" level=info msg="ignoring event" container=28f0cf39cfbb68e1cdb2f2013b106a2940c324cf6d0d8d0a15feacabf121b119 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID
	853f3c51575dd       13753a81eccfd                                                                                                      8 seconds ago        Exited              hello-world-app           2                   b45168fbed5f8
	db9d43c894a4c       nginx@sha256:02ffd439b71d9ea9408e449b568f65c0bbbb94bebd8750f1d80231ab6496008e                                      34 seconds ago       Running             nginx                     0                   8691dc962dc63
	7fc9fbce956df       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   55 seconds ago       Exited              controller                0                   28f0cf39cfbb6
	ea0087f565eb8       jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7               About a minute ago   Exited              patch                     0                   b39678463e201
	27f1276731a37       jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7               About a minute ago   Exited              create                    0                   61b5509e77d9b
	e0363764c55e6       gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944    About a minute ago   Running             storage-provisioner       0                   59c153793acf0
	13f01b9373301       6e17ba78cf3eb                                                                                                      About a minute ago   Running             coredns                   0                   8cd20e4ad6c11
	4a543cd77f6f5       565297bc6f7d4                                                                                                      About a minute ago   Running             kube-proxy                0                   970a9a7543efc
	5003745eaafd6       095f37015706d                                                                                                      About a minute ago   Running             kube-scheduler            0                   ae24c6ce42c52
	0b464588143a4       2694cf044d665                                                                                                      About a minute ago   Running             kube-apiserver            0                   1ad5182f0974e
	75b7cd98378ed       ab707b0a0ea33                                                                                                      About a minute ago   Running             etcd                      0                   4a52ce869bad9
	ffd97d080490b       68a4fac29a865                                                                                                      About a minute ago   Running             kube-controller-manager   0                   1caa8726b8d34
	
	* 
	* ==> coredns [13f01b937330] <==
	* [INFO] 172.17.0.1:44095 - 1773 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000032709s
	[INFO] 172.17.0.1:14254 - 55480 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000033793s
	[INFO] 172.17.0.1:44095 - 19729 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000032126s
	[INFO] 172.17.0.1:14254 - 10839 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000011833s
	[INFO] 172.17.0.1:44095 - 40584 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000029126s
	[INFO] 172.17.0.1:14254 - 11094 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000011293s
	[INFO] 172.17.0.1:44095 - 42895 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00002846s
	[INFO] 172.17.0.1:14254 - 60311 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000011084s
	[INFO] 172.17.0.1:14254 - 36546 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000012417s
	[INFO] 172.17.0.1:44095 - 54324 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000048918s
	[INFO] 172.17.0.1:14254 - 35665 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000012209s
	[INFO] 172.17.0.1:31807 - 51117 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000044044s
	[INFO] 172.17.0.1:64480 - 26444 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000028209s
	[INFO] 172.17.0.1:31807 - 25437 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000018793s
	[INFO] 172.17.0.1:64480 - 43938 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000019251s
	[INFO] 172.17.0.1:64480 - 52899 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000018459s
	[INFO] 172.17.0.1:31807 - 5367 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00003621s
	[INFO] 172.17.0.1:31807 - 61738 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000015917s
	[INFO] 172.17.0.1:64480 - 2278 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000016917s
	[INFO] 172.17.0.1:31807 - 7720 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000031792s
	[INFO] 172.17.0.1:64480 - 43249 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000017876s
	[INFO] 172.17.0.1:64480 - 7874 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000015459s
	[INFO] 172.17.0.1:31807 - 12846 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000037168s
	[INFO] 172.17.0.1:64480 - 50735 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00004596s
	[INFO] 172.17.0.1:31807 - 9379 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000045252s
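The NXDOMAIN bursts above are expected resolver behavior, not failures: pod resolv.conf uses ndots:5 with the cluster search path, so even the fully-qualified name hello-world-app.default.svc.cluster.local is first retried with each search suffix appended, and only the final bare query returns NOERROR. A sketch of the typical pod /etc/resolv.conf that produces this pattern (assumed defaults for this ServiceCIDR, not captured in this log):

	search default.svc.cluster.local svc.cluster.local cluster.local
	nameserver 10.96.0.10
	options ndots:5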
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-371000
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-371000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=24686ce6bbd657e092eb3c3fd6be64c1b7241033
	                    minikube.k8s.io/name=ingress-addon-legacy-371000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_05_20T08_11_17_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 20 May 2023 15:11:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-371000
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 20 May 2023 15:12:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 20 May 2023 15:12:24 +0000   Sat, 20 May 2023 15:11:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 20 May 2023 15:12:24 +0000   Sat, 20 May 2023 15:11:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 20 May 2023 15:12:24 +0000   Sat, 20 May 2023 15:11:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 20 May 2023 15:12:24 +0000   Sat, 20 May 2023 15:11:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.6
	  Hostname:    ingress-addon-legacy-371000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             4004084Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             4004084Ki
	  pods:               110
	System Info:
	  Machine ID:                 d77c379fb93f4809a9956253aa22d8c1
	  System UUID:                d77c379fb93f4809a9956253aa22d8c1
	  Boot ID:                    d1bc664f-ed28-4deb-a390-e0440eb1ee50
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.23
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-fnflz                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 coredns-66bff467f8-8756c                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     79s
	  kube-system                 etcd-ingress-addon-legacy-371000                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-apiserver-ingress-addon-legacy-371000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-371000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-proxy-k65p6                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-scheduler-ingress-addon-legacy-371000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (1%)   170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From        Message
	  ----    ------                   ----  ----        -------
	  Normal  Starting                 88s   kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  87s   kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  87s   kubelet     Node ingress-addon-legacy-371000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    87s   kubelet     Node ingress-addon-legacy-371000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     87s   kubelet     Node ingress-addon-legacy-371000 status is now: NodeHasSufficientPID
	  Normal  Starting                 78s   kube-proxy  Starting kube-proxy.
	
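The percentages in the Allocated resources table follow from the Allocatable block above (2 CPUs, 4004084Ki memory). A quick Go check of that arithmetic:

    package main

    import "fmt"

    func main() {
    	// Allocatable figures taken from the describe output above.
    	allocatableMilliCPU := int64(2000) // 2 CPUs = 2000m
    	allocatableMemKi := int64(4004084)

    	requestsMilliCPU := int64(650)    // CPU requests summed across pods
    	requestsMemKi := int64(70 * 1024) // 70Mi in Ki

    	fmt.Printf("cpu:    %d%%\n", requestsMilliCPU*100/allocatableMilliCPU) // 32%
    	fmt.Printf("memory: %d%%\n", requestsMemKi*100/allocatableMemKi)       // 1%
    }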
	* 
	* ==> dmesg <==
	* [May20 15:10] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.661755] EINJ: EINJ table not found.
	[  +0.511169] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.044301] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000802] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +4.202647] systemd-fstab-generator[473]: Ignoring "noauto" for root device
	[  +0.072273] systemd-fstab-generator[484]: Ignoring "noauto" for root device
	[  +3.474063] systemd-fstab-generator[776]: Ignoring "noauto" for root device
	[  +1.487712] systemd-fstab-generator[937]: Ignoring "noauto" for root device
	[  +0.174283] systemd-fstab-generator[972]: Ignoring "noauto" for root device
	[  +0.085678] systemd-fstab-generator[987]: Ignoring "noauto" for root device
	[  +0.086054] systemd-fstab-generator[1013]: Ignoring "noauto" for root device
	[  +1.140233] kauditd_printk_skb: 53 callbacks suppressed
	[  +3.175048] systemd-fstab-generator[1211]: Ignoring "noauto" for root device
	[May20 15:11] systemd-fstab-generator[1694]: Ignoring "noauto" for root device
	[  +8.116176] kauditd_printk_skb: 68 callbacks suppressed
	[  +0.079489] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[  +5.879268] systemd-fstab-generator[2786]: Ignoring "noauto" for root device
	[ +16.590637] kauditd_printk_skb: 7 callbacks suppressed
	[  +7.098148] kauditd_printk_skb: 7 callbacks suppressed
	[  +7.338537] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[May20 15:12] kauditd_printk_skb: 1 callbacks suppressed
	
	* 
	* ==> etcd [75b7cd98378e] <==
	* raft2023/05/20 15:11:12 INFO: ed054832bd1917e1 became follower at term 0
	raft2023/05/20 15:11:12 INFO: newRaft ed054832bd1917e1 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/05/20 15:11:12 INFO: ed054832bd1917e1 became follower at term 1
	raft2023/05/20 15:11:12 INFO: ed054832bd1917e1 switched to configuration voters=(17079136544630577121)
	2023-05-20 15:11:12.488148 W | auth: simple token is not cryptographically signed
	2023-05-20 15:11:12.489332 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-05-20 15:11:12.490184 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-05-20 15:11:12.490252 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-05-20 15:11:12.490351 I | etcdserver: ed054832bd1917e1 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-05-20 15:11:12.490558 I | embed: listening for peers on 192.168.105.6:2380
	raft2023/05/20 15:11:12 INFO: ed054832bd1917e1 switched to configuration voters=(17079136544630577121)
	2023-05-20 15:11:12.490730 I | etcdserver/membership: added member ed054832bd1917e1 [https://192.168.105.6:2380] to cluster 45a39c2c59b0edf4
	raft2023/05/20 15:11:12 INFO: ed054832bd1917e1 is starting a new election at term 1
	raft2023/05/20 15:11:12 INFO: ed054832bd1917e1 became candidate at term 2
	raft2023/05/20 15:11:12 INFO: ed054832bd1917e1 received MsgVoteResp from ed054832bd1917e1 at term 2
	raft2023/05/20 15:11:12 INFO: ed054832bd1917e1 became leader at term 2
	raft2023/05/20 15:11:12 INFO: raft.node: ed054832bd1917e1 elected leader ed054832bd1917e1 at term 2
	2023-05-20 15:11:12.803451 I | etcdserver: setting up the initial cluster version to 3.4
	2023-05-20 15:11:12.823434 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-05-20 15:11:12.827444 I | etcdserver/api: enabled capabilities for version 3.4
	2023-05-20 15:11:12.830459 I | etcdserver: published {Name:ingress-addon-legacy-371000 ClientURLs:[https://192.168.105.6:2379]} to cluster 45a39c2c59b0edf4
	2023-05-20 15:11:12.830480 I | embed: ready to serve client requests
	2023-05-20 15:11:12.830520 I | embed: ready to serve client requests
	2023-05-20 15:11:12.838803 I | embed: serving client requests on 127.0.0.1:2379
	2023-05-20 15:11:12.939721 I | embed: serving client requests on 192.168.105.6:2379
	
	* 
	* ==> kernel <==
	*  15:12:51 up 2 min,  0 users,  load average: 0.74, 0.24, 0.08
	Linux ingress-addon-legacy-371000 5.10.57 #1 SMP PREEMPT Mon May 15 19:29:44 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [0b464588143a] <==
	* I0520 15:11:14.997755       1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister
	E0520 15:11:15.010597       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.105.6, ResourceVersion: 0, AdditionalErrorMsg: 
	I0520 15:11:15.081250       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0520 15:11:15.081262       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0520 15:11:15.082932       1 cache.go:39] Caches are synced for autoregister controller
	I0520 15:11:15.083066       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0520 15:11:15.102708       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0520 15:11:15.980081       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0520 15:11:15.980535       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0520 15:11:15.994000       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0520 15:11:16.003626       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0520 15:11:16.003654       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0520 15:11:16.142250       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0520 15:11:16.152395       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0520 15:11:16.247885       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.105.6]
	I0520 15:11:16.248332       1 controller.go:609] quota admission added evaluator for: endpoints
	I0520 15:11:16.249891       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0520 15:11:17.286951       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0520 15:11:17.643949       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0520 15:11:17.843335       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0520 15:11:23.978758       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0520 15:11:32.698162       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0520 15:11:33.316751       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0520 15:11:42.861363       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0520 15:12:14.499941       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	* 
	* ==> kube-controller-manager [ffd97d080490] <==
	* I0520 15:11:33.017251       1 shared_informer.go:230] Caches are synced for attach detach 
	I0520 15:11:33.044918       1 shared_informer.go:230] Caches are synced for service account 
	I0520 15:11:33.088426       1 shared_informer.go:230] Caches are synced for namespace 
	I0520 15:11:33.156217       1 shared_informer.go:230] Caches are synced for job 
	I0520 15:11:33.242016       1 shared_informer.go:230] Caches are synced for disruption 
	I0520 15:11:33.242028       1 disruption.go:339] Sending events to api server.
	I0520 15:11:33.245273       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0520 15:11:33.245286       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0520 15:11:33.252698       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0520 15:11:33.264728       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"00a49698-3b4c-4d10-9035-6137b9ffd208", APIVersion:"apps/v1", ResourceVersion:"334", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I0520 15:11:33.274506       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"291c0fc9-194e-42c0-8284-338ad47b5787", APIVersion:"apps/v1", ResourceVersion:"335", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-m2lfb
	I0520 15:11:33.284059       1 shared_informer.go:230] Caches are synced for stateful set 
	I0520 15:11:33.293798       1 shared_informer.go:230] Caches are synced for resource quota 
	I0520 15:11:33.314560       1 shared_informer.go:230] Caches are synced for daemon sets 
	I0520 15:11:33.319167       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"13934369-266a-4b56-ab47-d82d1542d56d", APIVersion:"apps/v1", ResourceVersion:"211", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-k65p6
	I0520 15:11:33.343850       1 shared_informer.go:230] Caches are synced for resource quota 
	I0520 15:11:42.841551       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"c6fc4d8e-7c76-45d5-ade4-44bffd79a5a4", APIVersion:"apps/v1", ResourceVersion:"433", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0520 15:11:42.845739       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"f9342773-c9d4-4cc8-a1eb-f2e14a574dbb", APIVersion:"apps/v1", ResourceVersion:"434", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-tnqdl
	I0520 15:11:42.875049       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"597e5621-a970-488b-ab8f-c3c6d0952768", APIVersion:"batch/v1", ResourceVersion:"445", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-b9f5l
	I0520 15:11:42.875154       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"d78a6c40-90d3-4c52-b02c-c2d0c34ccffe", APIVersion:"batch/v1", ResourceVersion:"444", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-cx6fw
	I0520 15:11:46.277201       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"d78a6c40-90d3-4c52-b02c-c2d0c34ccffe", APIVersion:"batch/v1", ResourceVersion:"457", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0520 15:11:46.291148       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"597e5621-a970-488b-ab8f-c3c6d0952768", APIVersion:"batch/v1", ResourceVersion:"452", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0520 15:12:24.803215       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"0ac766d0-63ad-430e-abb5-58d6c61c24bc", APIVersion:"apps/v1", ResourceVersion:"577", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0520 15:12:24.807837       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"d74110ce-56d4-4be9-bfed-42f0f3e02cf0", APIVersion:"apps/v1", ResourceVersion:"578", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-fnflz
	E0520 15:12:49.273298       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-t7dp2" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	* 
	* ==> kube-proxy [4a543cd77f6f] <==
	* W0520 15:11:33.814655       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0520 15:11:33.818423       1 node.go:136] Successfully retrieved node IP: 192.168.105.6
	I0520 15:11:33.818438       1 server_others.go:186] Using iptables Proxier.
	I0520 15:11:33.818656       1 server.go:583] Version: v1.18.20
	I0520 15:11:33.820004       1 config.go:315] Starting service config controller
	I0520 15:11:33.820068       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0520 15:11:33.821036       1 config.go:133] Starting endpoints config controller
	I0520 15:11:33.821144       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0520 15:11:33.920280       1 shared_informer.go:230] Caches are synced for service config 
	I0520 15:11:33.921630       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [5003745eaafd] <==
	* I0520 15:11:15.025246       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0520 15:11:15.026479       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0520 15:11:15.026528       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0520 15:11:15.029282       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0520 15:11:15.026535       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0520 15:11:15.030201       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0520 15:11:15.030375       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0520 15:11:15.030870       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0520 15:11:15.030895       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0520 15:11:15.030916       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0520 15:11:15.030937       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0520 15:11:15.030956       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0520 15:11:15.030977       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0520 15:11:15.030996       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0520 15:11:15.031610       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0520 15:11:15.031645       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0520 15:11:15.031952       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0520 15:11:16.008022       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0520 15:11:16.053339       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0520 15:11:16.058721       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0520 15:11:16.079320       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0520 15:11:16.096551       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0520 15:11:16.335436       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0520 15:11:32.727650       1 factory.go:503] pod: kube-system/coredns-66bff467f8-m2lfb is already present in the active queue
	E0520 15:11:32.732967       1 factory.go:503] pod: kube-system/coredns-66bff467f8-8756c is already present in the active queue
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Sat 2023-05-20 15:10:45 UTC, ends at Sat 2023-05-20 15:12:51 UTC. --
	May 20 15:12:29 ingress-addon-legacy-371000 kubelet[2792]: I0520 15:12:29.804664    2792 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: d4465ecac2ad8d7a550a1ac69ad6a5ee312b4c46ea721a22863c58bae413e8ef
	May 20 15:12:29 ingress-addon-legacy-371000 kubelet[2792]: E0520 15:12:29.805160    2792 pod_workers.go:191] Error syncing pod 98c4baf6-9329-43ed-b52d-c0b85d658313 ("hello-world-app-5f5d8b66bb-fnflz_default(98c4baf6-9329-43ed-b52d-c0b85d658313)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-fnflz_default(98c4baf6-9329-43ed-b52d-c0b85d658313)"
	May 20 15:12:32 ingress-addon-legacy-371000 kubelet[2792]: I0520 15:12:32.021998    2792 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: a25e115d4f7c286b2e73f601572db0d65e6de17491d438c2e49f7c3b3e935aa9
	May 20 15:12:32 ingress-addon-legacy-371000 kubelet[2792]: E0520 15:12:32.023719    2792 pod_workers.go:191] Error syncing pod 4ed4fe1a-c0b6-45d3-a13f-368ddd542ee2 ("kube-ingress-dns-minikube_kube-system(4ed4fe1a-c0b6-45d3-a13f-368ddd542ee2)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with CrashLoopBackOff: "back-off 20s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(4ed4fe1a-c0b6-45d3-a13f-368ddd542ee2)"
	May 20 15:12:40 ingress-addon-legacy-371000 kubelet[2792]: I0520 15:12:40.207387    2792 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-9z2tl" (UniqueName: "kubernetes.io/secret/4ed4fe1a-c0b6-45d3-a13f-368ddd542ee2-minikube-ingress-dns-token-9z2tl") pod "4ed4fe1a-c0b6-45d3-a13f-368ddd542ee2" (UID: "4ed4fe1a-c0b6-45d3-a13f-368ddd542ee2")
	May 20 15:12:40 ingress-addon-legacy-371000 kubelet[2792]: I0520 15:12:40.210491    2792 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ed4fe1a-c0b6-45d3-a13f-368ddd542ee2-minikube-ingress-dns-token-9z2tl" (OuterVolumeSpecName: "minikube-ingress-dns-token-9z2tl") pod "4ed4fe1a-c0b6-45d3-a13f-368ddd542ee2" (UID: "4ed4fe1a-c0b6-45d3-a13f-368ddd542ee2"). InnerVolumeSpecName "minikube-ingress-dns-token-9z2tl". PluginName "kubernetes.io/secret", VolumeGidValue ""
	May 20 15:12:40 ingress-addon-legacy-371000 kubelet[2792]: I0520 15:12:40.312546    2792 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-9z2tl" (UniqueName: "kubernetes.io/secret/4ed4fe1a-c0b6-45d3-a13f-368ddd542ee2-minikube-ingress-dns-token-9z2tl") on node "ingress-addon-legacy-371000" DevicePath ""
	May 20 15:12:42 ingress-addon-legacy-371000 kubelet[2792]: I0520 15:12:42.998877    2792 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: a25e115d4f7c286b2e73f601572db0d65e6de17491d438c2e49f7c3b3e935aa9
	May 20 15:12:43 ingress-addon-legacy-371000 kubelet[2792]: I0520 15:12:43.021713    2792 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: d4465ecac2ad8d7a550a1ac69ad6a5ee312b4c46ea721a22863c58bae413e8ef
	May 20 15:12:43 ingress-addon-legacy-371000 kubelet[2792]: W0520 15:12:43.125185    2792 container.go:412] Failed to create summary reader for "/kubepods/besteffort/pod98c4baf6-9329-43ed-b52d-c0b85d658313/853f3c51575ddd256f2d93e9cb8634eebdb3a070394884ae831d44e93d7f6b32": none of the resources are being tracked.
	May 20 15:12:44 ingress-addon-legacy-371000 kubelet[2792]: W0520 15:12:44.026928    2792 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-fnflz through plugin: invalid network status for
	May 20 15:12:44 ingress-addon-legacy-371000 kubelet[2792]: I0520 15:12:44.052257    2792 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 853f3c51575ddd256f2d93e9cb8634eebdb3a070394884ae831d44e93d7f6b32
	May 20 15:12:44 ingress-addon-legacy-371000 kubelet[2792]: E0520 15:12:44.052465    2792 pod_workers.go:191] Error syncing pod 98c4baf6-9329-43ed-b52d-c0b85d658313 ("hello-world-app-5f5d8b66bb-fnflz_default(98c4baf6-9329-43ed-b52d-c0b85d658313)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-fnflz_default(98c4baf6-9329-43ed-b52d-c0b85d658313)"
	May 20 15:12:44 ingress-addon-legacy-371000 kubelet[2792]: I0520 15:12:44.052759    2792 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: d4465ecac2ad8d7a550a1ac69ad6a5ee312b4c46ea721a22863c58bae413e8ef
	May 20 15:12:44 ingress-addon-legacy-371000 kubelet[2792]: E0520 15:12:44.501210    2792 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-tnqdl.1760e2bd94113a9f", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-tnqdl", UID:"409951a3-0492-4c3a-9287-2eb0fafcebec", APIVersion:"v1", ResourceVersion:"439", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-371000"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc11256bb1da6029f, ext:86878391897, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc11256bb1da6029f, ext:86878391897, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-tnqdl.1760e2bd94113a9f" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	May 20 15:12:44 ingress-addon-legacy-371000 kubelet[2792]: E0520 15:12:44.532387    2792 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-tnqdl.1760e2bd94113a9f", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-tnqdl", UID:"409951a3-0492-4c3a-9287-2eb0fafcebec", APIVersion:"v1", ResourceVersion:"439", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-371000"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc11256bb1da6029f, ext:86878391897, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc11256bb1f93babf, ext:86910748281, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-tnqdl.1760e2bd94113a9f" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	May 20 15:12:45 ingress-addon-legacy-371000 kubelet[2792]: W0520 15:12:45.054706    2792 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-fnflz through plugin: invalid network status for
	May 20 15:12:47 ingress-addon-legacy-371000 kubelet[2792]: W0520 15:12:47.091712    2792 pod_container_deletor.go:77] Container "28f0cf39cfbb68e1cdb2f2013b106a2940c324cf6d0d8d0a15feacabf121b119" not found in pod's containers
	May 20 15:12:48 ingress-addon-legacy-371000 kubelet[2792]: I0520 15:12:48.719607    2792 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/409951a3-0492-4c3a-9287-2eb0fafcebec-webhook-cert") pod "409951a3-0492-4c3a-9287-2eb0fafcebec" (UID: "409951a3-0492-4c3a-9287-2eb0fafcebec")
	May 20 15:12:48 ingress-addon-legacy-371000 kubelet[2792]: I0520 15:12:48.720604    2792 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-2r84k" (UniqueName: "kubernetes.io/secret/409951a3-0492-4c3a-9287-2eb0fafcebec-ingress-nginx-token-2r84k") pod "409951a3-0492-4c3a-9287-2eb0fafcebec" (UID: "409951a3-0492-4c3a-9287-2eb0fafcebec")
	May 20 15:12:48 ingress-addon-legacy-371000 kubelet[2792]: I0520 15:12:48.726403    2792 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/409951a3-0492-4c3a-9287-2eb0fafcebec-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "409951a3-0492-4c3a-9287-2eb0fafcebec" (UID: "409951a3-0492-4c3a-9287-2eb0fafcebec"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	May 20 15:12:48 ingress-addon-legacy-371000 kubelet[2792]: I0520 15:12:48.727843    2792 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/409951a3-0492-4c3a-9287-2eb0fafcebec-ingress-nginx-token-2r84k" (OuterVolumeSpecName: "ingress-nginx-token-2r84k") pod "409951a3-0492-4c3a-9287-2eb0fafcebec" (UID: "409951a3-0492-4c3a-9287-2eb0fafcebec"). InnerVolumeSpecName "ingress-nginx-token-2r84k". PluginName "kubernetes.io/secret", VolumeGidValue ""
	May 20 15:12:48 ingress-addon-legacy-371000 kubelet[2792]: I0520 15:12:48.821213    2792 reconciler.go:319] Volume detached for volume "ingress-nginx-token-2r84k" (UniqueName: "kubernetes.io/secret/409951a3-0492-4c3a-9287-2eb0fafcebec-ingress-nginx-token-2r84k") on node "ingress-addon-legacy-371000" DevicePath ""
	May 20 15:12:48 ingress-addon-legacy-371000 kubelet[2792]: I0520 15:12:48.821317    2792 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/409951a3-0492-4c3a-9287-2eb0fafcebec-webhook-cert") on node "ingress-addon-legacy-371000" DevicePath ""
	May 20 15:12:50 ingress-addon-legacy-371000 kubelet[2792]: W0520 15:12:50.038030    2792 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/409951a3-0492-4c3a-9287-2eb0fafcebec/volumes" does not exist
	
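The growing "back-off 10s" / "back-off 20s" delays in the kubelet log above are its crash-loop backoff, which doubles after each failed restart. A short Go sketch of the schedule, assuming the upstream kubelet defaults of a 10s initial delay and a 5m cap:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Assumed kubelet behavior: 10s initial delay, doubling per failure, 5m cap.
    	delay, maxDelay := 10*time.Second, 5*time.Minute
    	for attempt := 1; attempt <= 7; attempt++ {
    		fmt.Printf("restart %d: back-off %v\n", attempt, delay)
    		delay *= 2
    		if delay > maxDelay {
    			delay = maxDelay
    		}
    	}
    }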
	* 
	* ==> storage-provisioner [e0363764c55e] <==
	* I0520 15:11:36.484693       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0520 15:11:36.489006       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0520 15:11:36.489023       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0520 15:11:36.491687       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0520 15:11:36.491919       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9e567354-0a31-4d4e-a1f3-029a762560fb", APIVersion:"v1", ResourceVersion:"389", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-371000_6ce73e8e-fe9f-4a20-9c52-7adf0904802c became leader
	I0520 15:11:36.493657       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-371000_6ce73e8e-fe9f-4a20-9c52-7adf0904802c!
	I0520 15:11:36.594617       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-371000_6ce73e8e-fe9f-4a20-9c52-7adf0904802c!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-371000 -n ingress-addon-legacy-371000
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-371000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (54.29s)

TestMountStart/serial/StartWithMountFirst (10.43s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-937000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-937000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.359012375s)

-- stdout --
	* [mount-start-1-937000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16543
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-937000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-937000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-937000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-937000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-937000 -n mount-start-1-937000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-937000 -n mount-start-1-937000: exit status 7 (66.492792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-937000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.43s)
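Each qemu2 failure in this report reduces to the same root cause: nothing is accepting connections on /var/run/socket_vmnet, so socket_vmnet_client fails before QEMU even starts. A minimal Go probe (a hypothetical helper, not part of the test suite) that reproduces the check:

    package main

    import (
    	"fmt"
    	"net"
    	"os"
    )

    func main() {
    	// The socket path minikube's qemu2 driver passes to socket_vmnet_client.
    	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
    	if err != nil {
    		// Matches the failure mode in the log: connection refused when the
    		// socket_vmnet daemon is not running (or the socket file is stale).
    		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
    		os.Exit(1)
    	}
    	defer conn.Close()
    	fmt.Println("socket_vmnet is accepting connections")
    }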

TestMultiNode/serial/FreshStart2Nodes (9.91s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-046000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:85: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-046000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.846224583s)

-- stdout --
	* [multinode-046000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16543
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-046000 in cluster multinode-046000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-046000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 08:15:30.544007    2822 out.go:296] Setting OutFile to fd 1 ...
	I0520 08:15:30.544133    2822 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:15:30.544136    2822 out.go:309] Setting ErrFile to fd 2...
	I0520 08:15:30.544144    2822 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:15:30.544212    2822 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16543-1012/.minikube/bin
	I0520 08:15:30.545249    2822 out.go:303] Setting JSON to false
	I0520 08:15:30.561064    2822 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":901,"bootTime":1684594829,"procs":452,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 08:15:30.561132    2822 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0520 08:15:30.566392    2822 out.go:177] * [multinode-046000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0520 08:15:30.574354    2822 notify.go:220] Checking for updates...
	I0520 08:15:30.577387    2822 out.go:177]   - MINIKUBE_LOCATION=16543
	I0520 08:15:30.580376    2822 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	I0520 08:15:30.583287    2822 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 08:15:30.586316    2822 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 08:15:30.592305    2822 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	I0520 08:15:30.595389    2822 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 08:15:30.598529    2822 driver.go:375] Setting default libvirt URI to qemu:///system
	I0520 08:15:30.601383    2822 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 08:15:30.608333    2822 start.go:295] selected driver: qemu2
	I0520 08:15:30.608346    2822 start.go:870] validating driver "qemu2" against <nil>
	I0520 08:15:30.608354    2822 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 08:15:30.610276    2822 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0520 08:15:30.611759    2822 out.go:177] * Automatically selected the socket_vmnet network
	I0520 08:15:30.614420    2822 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 08:15:30.614449    2822 cni.go:84] Creating CNI manager for ""
	I0520 08:15:30.614454    2822 cni.go:136] 0 nodes found, recommending kindnet
	I0520 08:15:30.614458    2822 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0520 08:15:30.614469    2822 start_flags.go:319] config:
	{Name:multinode-046000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-046000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0520 08:15:30.614555    2822 iso.go:125] acquiring lock: {Name:mkf05a14b47e9b67445252fbb4917f3bcb37f054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 08:15:30.623337    2822 out.go:177] * Starting control plane node multinode-046000 in cluster multinode-046000
	I0520 08:15:30.627339    2822 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0520 08:15:30.627362    2822 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0520 08:15:30.627376    2822 cache.go:57] Caching tarball of preloaded images
	I0520 08:15:30.627440    2822 preload.go:174] Found /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 08:15:30.627446    2822 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0520 08:15:30.627667    2822 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/multinode-046000/config.json ...
	I0520 08:15:30.627681    2822 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/multinode-046000/config.json: {Name:mk825d91868bead5031bc83c2d2f5ea795b1cde2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 08:15:30.627884    2822 cache.go:195] Successfully downloaded all kic artifacts
	I0520 08:15:30.627900    2822 start.go:364] acquiring machines lock for multinode-046000: {Name:mk270c257f49033ffda3b70fea9b877c78314aa8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 08:15:30.627930    2822 start.go:368] acquired machines lock for "multinode-046000" in 25.5µs
	I0520 08:15:30.627948    2822 start.go:93] Provisioning new machine with config: &{Name:multinode-046000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-046000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 08:15:30.627987    2822 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 08:15:30.636314    2822 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 08:15:30.653781    2822 start.go:159] libmachine.API.Create for "multinode-046000" (driver="qemu2")
	I0520 08:15:30.653814    2822 client.go:168] LocalClient.Create starting
	I0520 08:15:30.653883    2822 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca.pem
	I0520 08:15:30.653903    2822 main.go:141] libmachine: Decoding PEM data...
	I0520 08:15:30.653916    2822 main.go:141] libmachine: Parsing certificate...
	I0520 08:15:30.653965    2822 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/cert.pem
	I0520 08:15:30.653984    2822 main.go:141] libmachine: Decoding PEM data...
	I0520 08:15:30.653993    2822 main.go:141] libmachine: Parsing certificate...
	I0520 08:15:30.654352    2822 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16543-1012/.minikube/cache/iso/arm64/minikube-v1.30.1-1684174510-16506-arm64.iso...
	I0520 08:15:30.784672    2822 main.go:141] libmachine: Creating SSH key...
	I0520 08:15:30.974047    2822 main.go:141] libmachine: Creating Disk image...
	I0520 08:15:30.974054    2822 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 08:15:30.974215    2822 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/multinode-046000/disk.qcow2.raw /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/multinode-046000/disk.qcow2
	I0520 08:15:30.983260    2822 main.go:141] libmachine: STDOUT: 
	I0520 08:15:30.983274    2822 main.go:141] libmachine: STDERR: 
	I0520 08:15:30.983334    2822 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/multinode-046000/disk.qcow2 +20000M
	I0520 08:15:30.990542    2822 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 08:15:30.990564    2822 main.go:141] libmachine: STDERR: 
	I0520 08:15:30.990580    2822 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/multinode-046000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/multinode-046000/disk.qcow2
	I0520 08:15:30.990586    2822 main.go:141] libmachine: Starting QEMU VM...
	I0520 08:15:30.990625    2822 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/multinode-046000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/multinode-046000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/multinode-046000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:c9:bf:10:2b:80 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/multinode-046000/disk.qcow2
	I0520 08:15:30.992162    2822 main.go:141] libmachine: STDOUT: 
	I0520 08:15:30.992173    2822 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 08:15:30.992190    2822 client.go:171] LocalClient.Create took 338.370333ms
	I0520 08:15:32.994398    2822 start.go:128] duration metric: createHost completed in 2.366380875s
	I0520 08:15:32.994500    2822 start.go:83] releasing machines lock for "multinode-046000", held for 2.36656275s
	W0520 08:15:32.994560    2822 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:15:33.002170    2822 out.go:177] * Deleting "multinode-046000" in qemu2 ...
	W0520 08:15:33.019240    2822 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:15:33.019268    2822 start.go:702] Will try again in 5 seconds ...
	I0520 08:15:38.021514    2822 start.go:364] acquiring machines lock for multinode-046000: {Name:mk270c257f49033ffda3b70fea9b877c78314aa8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 08:15:38.022023    2822 start.go:368] acquired machines lock for "multinode-046000" in 410.625µs
	I0520 08:15:38.022139    2822 start.go:93] Provisioning new machine with config: &{Name:multinode-046000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-046000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 08:15:38.022466    2822 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 08:15:38.034982    2822 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 08:15:38.085095    2822 start.go:159] libmachine.API.Create for "multinode-046000" (driver="qemu2")
	I0520 08:15:38.085140    2822 client.go:168] LocalClient.Create starting
	I0520 08:15:38.085267    2822 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca.pem
	I0520 08:15:38.085322    2822 main.go:141] libmachine: Decoding PEM data...
	I0520 08:15:38.085341    2822 main.go:141] libmachine: Parsing certificate...
	I0520 08:15:38.085416    2822 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/cert.pem
	I0520 08:15:38.085447    2822 main.go:141] libmachine: Decoding PEM data...
	I0520 08:15:38.085462    2822 main.go:141] libmachine: Parsing certificate...
	I0520 08:15:38.085976    2822 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16543-1012/.minikube/cache/iso/arm64/minikube-v1.30.1-1684174510-16506-arm64.iso...
	I0520 08:15:38.209367    2822 main.go:141] libmachine: Creating SSH key...
	I0520 08:15:38.304410    2822 main.go:141] libmachine: Creating Disk image...
	I0520 08:15:38.304417    2822 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 08:15:38.304563    2822 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/multinode-046000/disk.qcow2.raw /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/multinode-046000/disk.qcow2
	I0520 08:15:38.313009    2822 main.go:141] libmachine: STDOUT: 
	I0520 08:15:38.313028    2822 main.go:141] libmachine: STDERR: 
	I0520 08:15:38.313077    2822 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/multinode-046000/disk.qcow2 +20000M
	I0520 08:15:38.320191    2822 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 08:15:38.320204    2822 main.go:141] libmachine: STDERR: 
	I0520 08:15:38.320214    2822 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/multinode-046000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/multinode-046000/disk.qcow2
	I0520 08:15:38.320220    2822 main.go:141] libmachine: Starting QEMU VM...
	I0520 08:15:38.320262    2822 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/multinode-046000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/multinode-046000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/multinode-046000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:e4:f2:61:ab:dc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/multinode-046000/disk.qcow2
	I0520 08:15:38.321738    2822 main.go:141] libmachine: STDOUT: 
	I0520 08:15:38.321757    2822 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 08:15:38.321768    2822 client.go:171] LocalClient.Create took 236.6195ms
	I0520 08:15:40.323923    2822 start.go:128] duration metric: createHost completed in 2.301431s
	I0520 08:15:40.324009    2822 start.go:83] releasing machines lock for "multinode-046000", held for 2.301940625s
	W0520 08:15:40.324562    2822 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-046000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-046000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:15:40.333287    2822 out.go:177] 
	W0520 08:15:40.338432    2822 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 08:15:40.338473    2822 out.go:239] * 
	* 
	W0520 08:15:40.341102    2822 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 08:15:40.349187    2822 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:87: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-046000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-046000 -n multinode-046000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-046000 -n multinode-046000: exit status 7 (64.590708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-046000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.91s)
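Every qemu2 start in this run dies at the same step: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so QEMU is never handed a network fd and the VM never boots. A minimal, hypothetical Go probe (not part of the test suite) that reproduces the same "Connection refused" against the SocketVMnetPath from the config above:

	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		// SocketVMnetPath from the machine config logged above.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// With no daemon serving the socket, this prints a
			// "connect: connection refused" error, matching the log.
			fmt.Printf("socket_vmnet unreachable: %v\n", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If this probe fails the same way, the fix is on the host (the socket_vmnet daemon is not running, or is serving a different path); the multinode failures below are all downstream of this one condition.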

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (101.42s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-046000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:481: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-046000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (122.444084ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-046000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:483: failed to create busybox deployment to multinode cluster
multinode_test.go:486: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-046000 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-046000 -- rollout status deployment/busybox: exit status 1 (55.835208ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-046000"

                                                
                                                
** /stderr **
multinode_test.go:488: failed to deploy busybox to multinode cluster
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-046000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-046000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (55.122583ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-046000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-046000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-046000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.104542ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-046000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-046000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-046000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.447917ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-046000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-046000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-046000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.361458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-046000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-046000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-046000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.968709ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-046000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-046000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-046000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.838958ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-046000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-046000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-046000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.997791ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-046000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-046000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-046000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.461041ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-046000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
E0520 08:16:23.609887    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/functional-537000/client.crt: no such file or directory
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-046000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-046000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.057625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-046000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-046000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-046000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.79325ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-046000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
E0520 08:16:57.625401    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/ingress-addon-legacy-371000/client.crt: no such file or directory
E0520 08:16:57.631789    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/ingress-addon-legacy-371000/client.crt: no such file or directory
E0520 08:16:57.643861    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/ingress-addon-legacy-371000/client.crt: no such file or directory
E0520 08:16:57.666073    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/ingress-addon-legacy-371000/client.crt: no such file or directory
E0520 08:16:57.708188    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/ingress-addon-legacy-371000/client.crt: no such file or directory
E0520 08:16:57.790325    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/ingress-addon-legacy-371000/client.crt: no such file or directory
E0520 08:16:57.952441    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/ingress-addon-legacy-371000/client.crt: no such file or directory
E0520 08:16:58.273580    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/ingress-addon-legacy-371000/client.crt: no such file or directory
E0520 08:16:58.914495    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/ingress-addon-legacy-371000/client.crt: no such file or directory
E0520 08:17:00.196889    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/ingress-addon-legacy-371000/client.crt: no such file or directory
E0520 08:17:02.759267    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/ingress-addon-legacy-371000/client.crt: no such file or directory
E0520 08:17:07.881557    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/ingress-addon-legacy-371000/client.crt: no such file or directory
E0520 08:17:18.123932    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/ingress-addon-legacy-371000/client.crt: no such file or directory
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-046000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-046000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.840375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-046000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:512: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:516: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-046000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:516: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-046000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (54.062292ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-046000"

                                                
                                                
** /stderr **
multinode_test.go:518: failed get Pod names
multinode_test.go:524: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-046000 -- exec  -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-046000 -- exec  -- nslookup kubernetes.io: exit status 1 (53.599167ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-046000"

                                                
                                                
** /stderr **
multinode_test.go:526: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-046000 -- exec  -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-046000 -- exec  -- nslookup kubernetes.default: exit status 1 (54.299875ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-046000"

                                                
                                                
** /stderr **
multinode_test.go:536: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:542: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-046000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-046000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (52.77575ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-046000"

                                                
                                                
** /stderr **
multinode_test.go:544: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-046000 -n multinode-046000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-046000 -n multinode-046000: exit status 7 (30.020209ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-046000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (101.42s)
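The 101.42s spent here is retry time, not progress: multinode_test.go:493 re-polls for pod IPs until its deadline, and every attempt hits `no server found for cluster "multinode-046000"` because provisioning never wrote a server URL for the cluster into the kubeconfig. A hedged sketch of that condition using k8s.io/client-go (clientcmd is the real package; the check itself is illustrative, not the test's own code):

	package main
	
	import (
		"fmt"
		"os"
	
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Load the kubeconfig the test run points KUBECONFIG at.
		cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG"))
		if err != nil {
			fmt.Println("load kubeconfig:", err)
			return
		}
		c, ok := cfg.Clusters["multinode-046000"]
		if !ok || c.Server == "" {
			// This is the state kubectl keeps reporting above.
			fmt.Println(`no server found for cluster "multinode-046000"`)
			return
		}
		fmt.Println("server:", c.Server)
	}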

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-046000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-046000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (54.05025ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-046000"

                                                
                                                
** /stderr **
multinode_test.go:554: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-046000 -n multinode-046000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-046000 -n multinode-046000: exit status 7 (29.181333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-046000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.08s)

                                                
                                    
TestMultiNode/serial/AddNode (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-046000 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-046000 -v 3 --alsologtostderr: exit status 89 (39.84475ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-046000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 08:17:21.962195    2920 out.go:296] Setting OutFile to fd 1 ...
	I0520 08:17:21.962394    2920 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:17:21.962397    2920 out.go:309] Setting ErrFile to fd 2...
	I0520 08:17:21.962399    2920 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:17:21.962462    2920 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16543-1012/.minikube/bin
	I0520 08:17:21.962687    2920 mustload.go:65] Loading cluster: multinode-046000
	I0520 08:17:21.962848    2920 config.go:182] Loaded profile config "multinode-046000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0520 08:17:21.967830    2920 out.go:177] * The control plane node must be running for this command
	I0520 08:17:21.971245    2920 out.go:177]   To start a cluster, run: "minikube start -p multinode-046000"

                                                
                                                
** /stderr **
multinode_test.go:112: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-046000 -v 3 --alsologtostderr" : exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-046000 -n multinode-046000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-046000 -n multinode-046000: exit status 7 (28.83775ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-046000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)
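`node add` exits 89 ("The control plane node must be running for this command") before ever touching the driver, so this failure carries no new information beyond the stopped host. For reference, a generic sketch (not the harness's actual helper) of how a Go test captures an exit status like the 89 above with os/exec:

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		// Args copied from the failing invocation above.
		cmd := exec.Command("out/minikube-darwin-arm64",
			"node", "add", "-p", "multinode-046000", "-v", "3", "--alsologtostderr")
		out, err := cmd.CombinedOutput()
		if ee, ok := err.(*exec.ExitError); ok {
			// Prints the numeric exit code plus the combined output.
			fmt.Printf("exit status %d\n%s", ee.ExitCode(), out)
		}
	}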

                                                
                                    
TestMultiNode/serial/ProfileList (0.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:155: expected profile "multinode-046000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-046000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-046000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"VMDriver\":\"\",\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":0,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.27.2\",\"ClusterName\":\"multinode-046000\",\"Namespace\":\"default\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\",\"NodeIP\":\"\",\"NodePort\":8443,\"NodeName\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.27.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\"},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-046000 -n multinode-046000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-046000 -n multinode-046000: exit status 7 (29.550833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-046000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.11s)
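The assertion here is a node count: the test expects 3 nodes (presumably the two requested at start plus the one AddNode tried to add), but the stored profile still lists a single unnamed control-plane node. A sketch of the same count over the JSON above; the struct is trimmed to just the fields involved and is illustrative, not minikube's own type:

	package main
	
	import (
		"encoding/json"
		"fmt"
	)
	
	// profileList mirrors only the fields of `profile list --output json`
	// that the node-count check needs.
	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Config struct {
				Nodes []struct {
					Name string `json:"Name"`
				} `json:"Nodes"`
			} `json:"Config"`
		} `json:"valid"`
	}
	
	func main() {
		// Abbreviated form of the output captured above.
		raw := []byte(`{"invalid":[],"valid":[{"Name":"multinode-046000","Config":{"Nodes":[{"Name":""}]}}]}`)
		var pl profileList
		if err := json.Unmarshal(raw, &pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Valid {
			fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes)) // prints 1; the test wanted 3
		}
	}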

                                                
                                    
TestMultiNode/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-046000 status --output json --alsologtostderr
multinode_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-046000 status --output json --alsologtostderr: exit status 7 (28.883875ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-046000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 08:17:22.144428    2930 out.go:296] Setting OutFile to fd 1 ...
	I0520 08:17:22.144573    2930 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:17:22.144576    2930 out.go:309] Setting ErrFile to fd 2...
	I0520 08:17:22.144578    2930 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:17:22.144643    2930 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16543-1012/.minikube/bin
	I0520 08:17:22.144757    2930 out.go:303] Setting JSON to true
	I0520 08:17:22.144776    2930 mustload.go:65] Loading cluster: multinode-046000
	I0520 08:17:22.144831    2930 notify.go:220] Checking for updates...
	I0520 08:17:22.144967    2930 config.go:182] Loaded profile config "multinode-046000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0520 08:17:22.144973    2930 status.go:255] checking status of multinode-046000 ...
	I0520 08:17:22.145160    2930 status.go:330] multinode-046000 host status = "Stopped" (err=<nil>)
	I0520 08:17:22.145164    2930 status.go:343] host is not running, skipping remaining checks
	I0520 08:17:22.145168    2930 status.go:257] multinode-046000 status: &{Name:multinode-046000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:180: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-046000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-046000 -n multinode-046000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-046000 -n multinode-046000: exit status 7 (28.648458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-046000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
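Unlike the surrounding failures, this one is a decode-shape mismatch: with a single node, `status --output json` emits one object, but the multinode test unmarshals into a slice (`[]cmd.Status`), so decoding fails even though the status command itself reported "Stopped" successfully. A minimal reproduction (Status here stands in for the real cmd.Status):

	package main
	
	import (
		"encoding/json"
		"fmt"
	)
	
	// Status stands in for minikube's cmd.Status in this reproduction.
	type Status struct {
		Name, Host, Kubelet, APIServer, Kubeconfig string
		Worker                                     bool
	}
	
	func main() {
		// Verbatim stdout from the status call above.
		out := []byte(`{"Name":"multinode-046000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
	
		var many []Status
		if err := json.Unmarshal(out, &many); err != nil {
			// Fails: "cannot unmarshal object into Go value of type
			// []main.Status" -- the same class of error the test logs.
			fmt.Println("slice decode:", err)
		}
	
		var one Status
		if err := json.Unmarshal(out, &one); err == nil {
			fmt.Println("object decode works, host =", one.Host)
		}
	}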

                                                
                                    
TestMultiNode/serial/StopNode (0.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-046000 node stop m03
multinode_test.go:210: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-046000 node stop m03: exit status 85 (45.789125ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:212: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-046000 node stop m03": exit status 85
multinode_test.go:216: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-046000 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-046000 status: exit status 7 (28.664417ms)

                                                
                                                
-- stdout --
	multinode-046000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-046000 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-046000 status --alsologtostderr: exit status 7 (28.735666ms)

                                                
                                                
-- stdout --
	multinode-046000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 08:17:22.277108    2938 out.go:296] Setting OutFile to fd 1 ...
	I0520 08:17:22.277248    2938 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:17:22.277251    2938 out.go:309] Setting ErrFile to fd 2...
	I0520 08:17:22.277253    2938 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:17:22.277319    2938 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16543-1012/.minikube/bin
	I0520 08:17:22.277428    2938 out.go:303] Setting JSON to false
	I0520 08:17:22.277440    2938 mustload.go:65] Loading cluster: multinode-046000
	I0520 08:17:22.277500    2938 notify.go:220] Checking for updates...
	I0520 08:17:22.277628    2938 config.go:182] Loaded profile config "multinode-046000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0520 08:17:22.277634    2938 status.go:255] checking status of multinode-046000 ...
	I0520 08:17:22.277810    2938 status.go:330] multinode-046000 host status = "Stopped" (err=<nil>)
	I0520 08:17:22.277813    2938 status.go:343] host is not running, skipping remaining checks
	I0520 08:17:22.277815    2938 status.go:257] multinode-046000 status: &{Name:multinode-046000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:229: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-046000 status --alsologtostderr": multinode-046000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-046000 -n multinode-046000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-046000 -n multinode-046000: exit status 7 (28.453208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-046000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.13s)
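Two failures stack here: `node stop m03` exits 85 because the single-node cluster has no m03 to stop, and the follow-up status check then finds zero running kubelets. A rough sketch of the kind of count behind "incorrect number of running kubelets" (the real test parses the status text; this just counts matches on the output above):

	package main
	
	import (
		"fmt"
		"strings"
	)
	
	func main() {
		// Status text captured above; the host is stopped, so no kubelet runs.
		status := "multinode-046000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"
		running := strings.Count(status, "kubelet: Running")
		fmt.Println("running kubelets:", running) // 0; the test expects one per live node
	}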

                                                
                                    
TestMultiNode/serial/StartAfterStop (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-046000 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-046000 node start m03 --alsologtostderr: exit status 85 (46.571417ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 08:17:22.334977    2942 out.go:296] Setting OutFile to fd 1 ...
	I0520 08:17:22.335207    2942 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:17:22.335209    2942 out.go:309] Setting ErrFile to fd 2...
	I0520 08:17:22.335212    2942 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:17:22.335294    2942 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16543-1012/.minikube/bin
	I0520 08:17:22.335524    2942 mustload.go:65] Loading cluster: multinode-046000
	I0520 08:17:22.335700    2942 config.go:182] Loaded profile config "multinode-046000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0520 08:17:22.340377    2942 out.go:177] 
	W0520 08:17:22.344360    2942 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0520 08:17:22.344365    2942 out.go:239] * 
	* 
	W0520 08:17:22.346011    2942 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 08:17:22.349270    2942 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:256: I0520 08:17:22.334977    2942 out.go:296] Setting OutFile to fd 1 ...
I0520 08:17:22.335207    2942 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0520 08:17:22.335209    2942 out.go:309] Setting ErrFile to fd 2...
I0520 08:17:22.335212    2942 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0520 08:17:22.335294    2942 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16543-1012/.minikube/bin
I0520 08:17:22.335524    2942 mustload.go:65] Loading cluster: multinode-046000
I0520 08:17:22.335700    2942 config.go:182] Loaded profile config "multinode-046000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0520 08:17:22.340377    2942 out.go:177] 
W0520 08:17:22.344360    2942 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0520 08:17:22.344365    2942 out.go:239] * 
* 
W0520 08:17:22.346011    2942 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0520 08:17:22.349270    2942 out.go:177] 
multinode_test.go:257: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-046000 node start m03 --alsologtostderr": exit status 85
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-046000 status
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-046000 status: exit status 7 (29.039166ms)

                                                
                                                
-- stdout --
	multinode-046000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:263: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-046000 status" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-046000 -n multinode-046000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-046000 -n multinode-046000: exit status 7 (28.391167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-046000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (0.10s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (5.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-046000
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-046000
multinode_test.go:295: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-046000 --wait=true -v=8 --alsologtostderr
multinode_test.go:295: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-046000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.178313125s)

                                                
                                                
-- stdout --
	* [multinode-046000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16543
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node multinode-046000 in cluster multinode-046000
	* Restarting existing qemu2 VM for "multinode-046000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-046000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 08:17:22.525432    2952 out.go:296] Setting OutFile to fd 1 ...
	I0520 08:17:22.525531    2952 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:17:22.525535    2952 out.go:309] Setting ErrFile to fd 2...
	I0520 08:17:22.525538    2952 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:17:22.525615    2952 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16543-1012/.minikube/bin
	I0520 08:17:22.526546    2952 out.go:303] Setting JSON to false
	I0520 08:17:22.541527    2952 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1013,"bootTime":1684594829,"procs":447,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 08:17:22.541594    2952 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0520 08:17:22.546359    2952 out.go:177] * [multinode-046000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0520 08:17:22.553368    2952 out.go:177]   - MINIKUBE_LOCATION=16543
	I0520 08:17:22.553393    2952 notify.go:220] Checking for updates...
	I0520 08:17:22.561155    2952 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	I0520 08:17:22.564275    2952 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 08:17:22.568349    2952 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 08:17:22.569782    2952 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	I0520 08:17:22.573311    2952 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 08:17:22.576547    2952 config.go:182] Loaded profile config "multinode-046000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0520 08:17:22.576570    2952 driver.go:375] Setting default libvirt URI to qemu:///system
	I0520 08:17:22.581079    2952 out.go:177] * Using the qemu2 driver based on existing profile
	I0520 08:17:22.588325    2952 start.go:295] selected driver: qemu2
	I0520 08:17:22.588333    2952 start.go:870] validating driver "qemu2" against &{Name:multinode-046000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-046000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0520 08:17:22.588398    2952 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 08:17:22.590316    2952 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 08:17:22.590336    2952 cni.go:84] Creating CNI manager for ""
	I0520 08:17:22.590340    2952 cni.go:136] 1 nodes found, recommending kindnet
	I0520 08:17:22.590346    2952 start_flags.go:319] config:
	{Name:multinode-046000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-046000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0520 08:17:22.590406    2952 iso.go:125] acquiring lock: {Name:mkf05a14b47e9b67445252fbb4917f3bcb37f054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 08:17:22.598272    2952 out.go:177] * Starting control plane node multinode-046000 in cluster multinode-046000
	I0520 08:17:22.602295    2952 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0520 08:17:22.602320    2952 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0520 08:17:22.602334    2952 cache.go:57] Caching tarball of preloaded images
	I0520 08:17:22.602385    2952 preload.go:174] Found /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 08:17:22.602390    2952 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0520 08:17:22.602448    2952 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/multinode-046000/config.json ...
	I0520 08:17:22.602788    2952 cache.go:195] Successfully downloaded all kic artifacts
	I0520 08:17:22.602797    2952 start.go:364] acquiring machines lock for multinode-046000: {Name:mk270c257f49033ffda3b70fea9b877c78314aa8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 08:17:22.602826    2952 start.go:368] acquired machines lock for "multinode-046000" in 23.875µs
	I0520 08:17:22.602837    2952 start.go:96] Skipping create...Using existing machine configuration
	I0520 08:17:22.602841    2952 fix.go:55] fixHost starting: 
	I0520 08:17:22.602961    2952 fix.go:103] recreateIfNeeded on multinode-046000: state=Stopped err=<nil>
	W0520 08:17:22.602970    2952 fix.go:129] unexpected machine state, will restart: <nil>
	I0520 08:17:22.610286    2952 out.go:177] * Restarting existing qemu2 VM for "multinode-046000" ...
	I0520 08:17:22.614245    2952 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/multinode-046000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/multinode-046000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/multinode-046000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:e4:f2:61:ab:dc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/multinode-046000/disk.qcow2
	I0520 08:17:22.616087    2952 main.go:141] libmachine: STDOUT: 
	I0520 08:17:22.616107    2952 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 08:17:22.616138    2952 fix.go:57] fixHost completed within 13.296791ms
	I0520 08:17:22.616143    2952 start.go:83] releasing machines lock for "multinode-046000", held for 13.3125ms
	W0520 08:17:22.616150    2952 start.go:687] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 08:17:22.616203    2952 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:17:22.616208    2952 start.go:702] Will try again in 5 seconds ...
	I0520 08:17:27.618341    2952 start.go:364] acquiring machines lock for multinode-046000: {Name:mk270c257f49033ffda3b70fea9b877c78314aa8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 08:17:27.618748    2952 start.go:368] acquired machines lock for "multinode-046000" in 281.125µs
	I0520 08:17:27.618909    2952 start.go:96] Skipping create...Using existing machine configuration
	I0520 08:17:27.618933    2952 fix.go:55] fixHost starting: 
	I0520 08:17:27.619666    2952 fix.go:103] recreateIfNeeded on multinode-046000: state=Stopped err=<nil>
	W0520 08:17:27.619696    2952 fix.go:129] unexpected machine state, will restart: <nil>
	I0520 08:17:27.628167    2952 out.go:177] * Restarting existing qemu2 VM for "multinode-046000" ...
	I0520 08:17:27.631442    2952 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/multinode-046000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/multinode-046000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/multinode-046000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:e4:f2:61:ab:dc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/multinode-046000/disk.qcow2
	I0520 08:17:27.640314    2952 main.go:141] libmachine: STDOUT: 
	I0520 08:17:27.640369    2952 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 08:17:27.640453    2952 fix.go:57] fixHost completed within 21.524209ms
	I0520 08:17:27.640478    2952 start.go:83] releasing machines lock for "multinode-046000", held for 21.68875ms
	W0520 08:17:27.640788    2952 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-046000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-046000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:17:27.649295    2952 out.go:177] 
	W0520 08:17:27.653435    2952 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 08:17:27.653458    2952 out.go:239] * 
	* 
	W0520 08:17:27.655908    2952 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 08:17:27.664065    2952 out.go:177] 

** /stderr **
multinode_test.go:297: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-046000" : exit status 80
multinode_test.go:300: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-046000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-046000 -n multinode-046000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-046000 -n multinode-046000: exit status 7 (33.273333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-046000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (5.37s)
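
Note: every restart attempt in this block dies at the same step: qemu is launched through socket_vmnet_client, which cannot reach the socket_vmnet daemon at /var/run/socket_vmnet. A minimal Go probe (illustrative only, not part of the test suite) reproduces the failing check:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// socket_vmnet_client dials this unix socket before launching qemu; with
	// the daemon down the dial fails, matching the "Connection refused"
	// STDERR lines captured above.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}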

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-046000 node delete m03
multinode_test.go:394: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-046000 node delete m03: exit status 89 (38.947917ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-046000"

-- /stdout --
multinode_test.go:396: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-046000 node delete m03": exit status 89
multinode_test.go:400: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-046000 status --alsologtostderr
multinode_test.go:400: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-046000 status --alsologtostderr: exit status 7 (28.516834ms)

-- stdout --
	multinode-046000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0520 08:17:27.845401    2965 out.go:296] Setting OutFile to fd 1 ...
	I0520 08:17:27.845528    2965 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:17:27.845531    2965 out.go:309] Setting ErrFile to fd 2...
	I0520 08:17:27.845534    2965 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:17:27.845609    2965 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16543-1012/.minikube/bin
	I0520 08:17:27.845719    2965 out.go:303] Setting JSON to false
	I0520 08:17:27.845731    2965 mustload.go:65] Loading cluster: multinode-046000
	I0520 08:17:27.845783    2965 notify.go:220] Checking for updates...
	I0520 08:17:27.845907    2965 config.go:182] Loaded profile config "multinode-046000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0520 08:17:27.845912    2965 status.go:255] checking status of multinode-046000 ...
	I0520 08:17:27.846085    2965 status.go:330] multinode-046000 host status = "Stopped" (err=<nil>)
	I0520 08:17:27.846089    2965 status.go:343] host is not running, skipping remaining checks
	I0520 08:17:27.846092    2965 status.go:257] multinode-046000 status: &{Name:multinode-046000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:402: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-046000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-046000 -n multinode-046000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-046000 -n multinode-046000: exit status 7 (28.264792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-046000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)
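
Note: exit status 89 is the early-exit path for commands that need a live control plane; "node delete" refuses to run against a stopped host. A hedged sketch of that kind of guard (names and structure hypothetical, not minikube's implementation):

package main

import (
	"fmt"
	"os"
)

func main() {
	profile, state := "multinode-046000", "Stopped" // hypothetical inputs matching this run
	// With the host stopped, print the same advice as the stdout block above
	// and bail out before attempting the node operation.
	if state != "Running" {
		fmt.Println("* The control plane node must be running for this command")
		fmt.Printf("  To start a cluster, run: %q\n", "minikube start -p "+profile)
		os.Exit(89)
	}
}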

TestMultiNode/serial/StopMultiNode (0.15s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-046000 stop
multinode_test.go:320: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-046000 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-046000 status: exit status 7 (29.507125ms)

-- stdout --
	multinode-046000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-046000 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-046000 status --alsologtostderr: exit status 7 (28.58675ms)

-- stdout --
	multinode-046000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0520 08:17:27.992091    2973 out.go:296] Setting OutFile to fd 1 ...
	I0520 08:17:27.992222    2973 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:17:27.992225    2973 out.go:309] Setting ErrFile to fd 2...
	I0520 08:17:27.992228    2973 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:17:27.992299    2973 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16543-1012/.minikube/bin
	I0520 08:17:27.992405    2973 out.go:303] Setting JSON to false
	I0520 08:17:27.992416    2973 mustload.go:65] Loading cluster: multinode-046000
	I0520 08:17:27.992470    2973 notify.go:220] Checking for updates...
	I0520 08:17:27.992594    2973 config.go:182] Loaded profile config "multinode-046000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0520 08:17:27.992601    2973 status.go:255] checking status of multinode-046000 ...
	I0520 08:17:27.992773    2973 status.go:330] multinode-046000 host status = "Stopped" (err=<nil>)
	I0520 08:17:27.992777    2973 status.go:343] host is not running, skipping remaining checks
	I0520 08:17:27.992779    2973 status.go:257] multinode-046000 status: &{Name:multinode-046000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:333: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-046000 status --alsologtostderr": multinode-046000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:337: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-046000 status --alsologtostderr": multinode-046000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-046000 -n multinode-046000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-046000 -n multinode-046000: exit status 7 (28.540917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-046000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (0.15s)
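
Note: the two assertions above (multinode_test.go:333 and :337) count "host: Stopped" and "kubelet: Stopped" lines in the status output and expect one per node; only the control plane is reported here, so a two-node cluster comes up short. A simplified sketch of that style of check (illustrative, not the actual test source):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Status output as captured above: a single node section.
	statusOutput := "multinode-046000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"
	const wantNodes = 2
	if got := strings.Count(statusOutput, "host: Stopped"); got != wantNodes {
		fmt.Printf("incorrect number of stopped hosts: got %d, want %d\n", got, wantNodes)
	}
	if got := strings.Count(statusOutput, "kubelet: Stopped"); got != wantNodes {
		fmt.Printf("incorrect number of stopped kubelets: got %d, want %d\n", got, wantNodes)
	}
}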

TestMultiNode/serial/RestartMultiNode (5.26s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-046000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:354: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-046000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.183346709s)

-- stdout --
	* [multinode-046000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16543
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node multinode-046000 in cluster multinode-046000
	* Restarting existing qemu2 VM for "multinode-046000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-046000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 08:17:28.048688    2977 out.go:296] Setting OutFile to fd 1 ...
	I0520 08:17:28.048786    2977 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:17:28.048790    2977 out.go:309] Setting ErrFile to fd 2...
	I0520 08:17:28.048793    2977 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:17:28.049126    2977 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16543-1012/.minikube/bin
	I0520 08:17:28.050446    2977 out.go:303] Setting JSON to false
	I0520 08:17:28.065765    2977 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1019,"bootTime":1684594829,"procs":447,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 08:17:28.065825    2977 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0520 08:17:28.070546    2977 out.go:177] * [multinode-046000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0520 08:17:28.077496    2977 notify.go:220] Checking for updates...
	I0520 08:17:28.081498    2977 out.go:177]   - MINIKUBE_LOCATION=16543
	I0520 08:17:28.085491    2977 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	I0520 08:17:28.089482    2977 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 08:17:28.093526    2977 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 08:17:28.094874    2977 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	I0520 08:17:28.097524    2977 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 08:17:28.100836    2977 config.go:182] Loaded profile config "multinode-046000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0520 08:17:28.101071    2977 driver.go:375] Setting default libvirt URI to qemu:///system
	I0520 08:17:28.105395    2977 out.go:177] * Using the qemu2 driver based on existing profile
	I0520 08:17:28.112501    2977 start.go:295] selected driver: qemu2
	I0520 08:17:28.112509    2977 start.go:870] validating driver "qemu2" against &{Name:multinode-046000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-046000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0520 08:17:28.112569    2977 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 08:17:28.114448    2977 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 08:17:28.114470    2977 cni.go:84] Creating CNI manager for ""
	I0520 08:17:28.114474    2977 cni.go:136] 1 nodes found, recommending kindnet
	I0520 08:17:28.114479    2977 start_flags.go:319] config:
	{Name:multinode-046000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-046000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0520 08:17:28.114553    2977 iso.go:125] acquiring lock: {Name:mkf05a14b47e9b67445252fbb4917f3bcb37f054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 08:17:28.126492    2977 out.go:177] * Starting control plane node multinode-046000 in cluster multinode-046000
	I0520 08:17:28.130570    2977 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0520 08:17:28.130593    2977 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0520 08:17:28.130607    2977 cache.go:57] Caching tarball of preloaded images
	I0520 08:17:28.130661    2977 preload.go:174] Found /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 08:17:28.130668    2977 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0520 08:17:28.130755    2977 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/multinode-046000/config.json ...
	I0520 08:17:28.131117    2977 cache.go:195] Successfully downloaded all kic artifacts
	I0520 08:17:28.131127    2977 start.go:364] acquiring machines lock for multinode-046000: {Name:mk270c257f49033ffda3b70fea9b877c78314aa8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 08:17:28.131154    2977 start.go:368] acquired machines lock for "multinode-046000" in 21.416µs
	I0520 08:17:28.131165    2977 start.go:96] Skipping create...Using existing machine configuration
	I0520 08:17:28.131169    2977 fix.go:55] fixHost starting: 
	I0520 08:17:28.131289    2977 fix.go:103] recreateIfNeeded on multinode-046000: state=Stopped err=<nil>
	W0520 08:17:28.131297    2977 fix.go:129] unexpected machine state, will restart: <nil>
	I0520 08:17:28.139484    2977 out.go:177] * Restarting existing qemu2 VM for "multinode-046000" ...
	I0520 08:17:28.143506    2977 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/multinode-046000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/multinode-046000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/multinode-046000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:e4:f2:61:ab:dc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/multinode-046000/disk.qcow2
	I0520 08:17:28.145489    2977 main.go:141] libmachine: STDOUT: 
	I0520 08:17:28.145508    2977 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 08:17:28.145537    2977 fix.go:57] fixHost completed within 14.366375ms
	I0520 08:17:28.145543    2977 start.go:83] releasing machines lock for "multinode-046000", held for 14.384541ms
	W0520 08:17:28.145551    2977 start.go:687] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 08:17:28.145622    2977 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:17:28.145628    2977 start.go:702] Will try again in 5 seconds ...
	I0520 08:17:33.147766    2977 start.go:364] acquiring machines lock for multinode-046000: {Name:mk270c257f49033ffda3b70fea9b877c78314aa8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 08:17:33.148238    2977 start.go:368] acquired machines lock for "multinode-046000" in 384.75µs
	I0520 08:17:33.148381    2977 start.go:96] Skipping create...Using existing machine configuration
	I0520 08:17:33.148399    2977 fix.go:55] fixHost starting: 
	I0520 08:17:33.149131    2977 fix.go:103] recreateIfNeeded on multinode-046000: state=Stopped err=<nil>
	W0520 08:17:33.149157    2977 fix.go:129] unexpected machine state, will restart: <nil>
	I0520 08:17:33.153487    2977 out.go:177] * Restarting existing qemu2 VM for "multinode-046000" ...
	I0520 08:17:33.160859    2977 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/multinode-046000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/multinode-046000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/multinode-046000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:e4:f2:61:ab:dc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/multinode-046000/disk.qcow2
	I0520 08:17:33.170074    2977 main.go:141] libmachine: STDOUT: 
	I0520 08:17:33.170137    2977 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 08:17:33.170235    2977 fix.go:57] fixHost completed within 21.836666ms
	I0520 08:17:33.170258    2977 start.go:83] releasing machines lock for "multinode-046000", held for 21.995792ms
	W0520 08:17:33.170592    2977 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-046000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-046000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:17:33.179680    2977 out.go:177] 
	W0520 08:17:33.183711    2977 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 08:17:33.183750    2977 out.go:239] * 
	* 
	W0520 08:17:33.186154    2977 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 08:17:33.194682    2977 out.go:177] 

** /stderr **
multinode_test.go:356: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-046000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-046000 -n multinode-046000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-046000 -n multinode-046000: exit status 7 (70.459083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-046000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.26s)
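
Note: the qemu command lines in this report carry `-netdev socket,id=net0,fd=3` because socket_vmnet_client is expected to dial /var/run/socket_vmnet and hand the connected descriptor to qemu as fd 3; the whole chain collapses when that first dial is refused. A hedged Go sketch of the fd-passing pattern (hypothetical wrapper, not the socket_vmnet source):

package main

import (
	"log"
	"net"
	"os"
	"os/exec"
)

func main() {
	// Step 1: connect to the vmnet daemon. This is the step that fails with
	// "Connection refused" throughout this report.
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		log.Fatal(err)
	}
	f, err := conn.(*net.UnixConn).File()
	if err != nil {
		log.Fatal(err)
	}
	// Step 2: launch qemu with the socket as an inherited descriptor. os/exec
	// numbers ExtraFiles from fd 3 upward in the child, which is why the
	// command line can hardcode fd=3.
	cmd := exec.Command("qemu-system-aarch64", "-netdev", "socket,id=net0,fd=3")
	cmd.ExtraFiles = []*os.File{f}
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}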

TestMultiNode/serial/ValidateNameConflict (19.92s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-046000
multinode_test.go:452: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-046000-m01 --driver=qemu2 
E0520 08:17:38.606428    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/ingress-addon-legacy-371000/client.crt: no such file or directory
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-046000-m01 --driver=qemu2 : exit status 80 (9.734283625s)

-- stdout --
	* [multinode-046000-m01] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16543
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-046000-m01 in cluster multinode-046000-m01
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-046000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-046000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-046000-m02 --driver=qemu2 
multinode_test.go:460: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-046000-m02 --driver=qemu2 : exit status 80 (9.934037542s)

-- stdout --
	* [multinode-046000-m02] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16543
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-046000-m02 in cluster multinode-046000-m02
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-046000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-046000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:462: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-046000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-046000
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-046000: exit status 89 (80.353916ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-046000"

-- /stdout --
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-046000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-046000 -n multinode-046000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-046000 -n multinode-046000: exit status 7 (29.462167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-046000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (19.92s)
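
Note: this test deliberately creates profiles named multinode-046000-m01 and multinode-046000-m02, which collide with the <cluster>-mNN names minikube derives for a multinode cluster's secondary nodes; both starts still die on socket_vmnet before the conflict path can be exercised end to end. A hedged sketch of the naming collision being validated (hypothetical helper, not minikube's implementation):

package main

import (
	"fmt"
	"regexp"
)

// conflictsWithNodeName reports whether a profile name collides with the
// <cluster>-mNN node names generated for an existing multinode cluster.
func conflictsWithNodeName(cluster, profile string) bool {
	re := regexp.MustCompile("^" + regexp.QuoteMeta(cluster) + `-m\d{2}$`)
	return re.MatchString(profile)
}

func main() {
	fmt.Println(conflictsWithNodeName("multinode-046000", "multinode-046000-m01")) // true
	fmt.Println(conflictsWithNodeName("multinode-046000", "multinode-046000-m02")) // true
	fmt.Println(conflictsWithNodeName("multinode-046000", "test-preload-159000"))  // false
}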

TestPreload (10.01s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-159000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-159000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.841145542s)

-- stdout --
	* [test-preload-159000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16543
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node test-preload-159000 in cluster test-preload-159000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-159000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 08:17:53.354053    3031 out.go:296] Setting OutFile to fd 1 ...
	I0520 08:17:53.354184    3031 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:17:53.354187    3031 out.go:309] Setting ErrFile to fd 2...
	I0520 08:17:53.354190    3031 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:17:53.354258    3031 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16543-1012/.minikube/bin
	I0520 08:17:53.355321    3031 out.go:303] Setting JSON to false
	I0520 08:17:53.370504    3031 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1044,"bootTime":1684594829,"procs":448,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 08:17:53.370574    3031 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0520 08:17:53.375160    3031 out.go:177] * [test-preload-159000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0520 08:17:53.383260    3031 out.go:177]   - MINIKUBE_LOCATION=16543
	I0520 08:17:53.383281    3031 notify.go:220] Checking for updates...
	I0520 08:17:53.394258    3031 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	I0520 08:17:53.397322    3031 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 08:17:53.400243    3031 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 08:17:53.403240    3031 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	I0520 08:17:53.406346    3031 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 08:17:53.409452    3031 config.go:182] Loaded profile config "multinode-046000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0520 08:17:53.409477    3031 driver.go:375] Setting default libvirt URI to qemu:///system
	I0520 08:17:53.413265    3031 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 08:17:53.420162    3031 start.go:295] selected driver: qemu2
	I0520 08:17:53.420167    3031 start.go:870] validating driver "qemu2" against <nil>
	I0520 08:17:53.420173    3031 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 08:17:53.422049    3031 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0520 08:17:53.425244    3031 out.go:177] * Automatically selected the socket_vmnet network
	I0520 08:17:53.428346    3031 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 08:17:53.428367    3031 cni.go:84] Creating CNI manager for ""
	I0520 08:17:53.428377    3031 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 08:17:53.428398    3031 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 08:17:53.428404    3031 start_flags.go:319] config:
	{Name:test-preload-159000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-159000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0520 08:17:53.428485    3031 iso.go:125] acquiring lock: {Name:mkf05a14b47e9b67445252fbb4917f3bcb37f054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 08:17:53.436185    3031 out.go:177] * Starting control plane node test-preload-159000 in cluster test-preload-159000
	I0520 08:17:53.440217    3031 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0520 08:17:53.440303    3031 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/test-preload-159000/config.json ...
	I0520 08:17:53.440317    3031 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/test-preload-159000/config.json: {Name:mkf2cbaac18334a5326eecbe5450994aa47c34c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 08:17:53.440362    3031 cache.go:107] acquiring lock: {Name:mk027ea34a428aeb94a39d4c2ef931f6bfff1a65 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 08:17:53.440359    3031 cache.go:107] acquiring lock: {Name:mk16b62ad5d32a020aeda5397f7794760ecffb2d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 08:17:53.440375    3031 cache.go:107] acquiring lock: {Name:mkfd3f4579726342cccf032fd4ac85e31d1e2641 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 08:17:53.440360    3031 cache.go:107] acquiring lock: {Name:mk979165501f8d0b460b31346e420b8bb36ea216 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 08:17:53.440551    3031 cache.go:195] Successfully downloaded all kic artifacts
	I0520 08:17:53.440549    3031 cache.go:107] acquiring lock: {Name:mk3f3c08cb5ba79e109638f4dc92bd616c126477 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 08:17:53.440567    3031 start.go:364] acquiring machines lock for test-preload-159000: {Name:mk270c257f49033ffda3b70fea9b877c78314aa8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 08:17:53.440570    3031 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 08:17:53.440606    3031 cache.go:107] acquiring lock: {Name:mk75229732f1df22c2040fea73cbdea7b3d5891f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 08:17:53.440619    3031 cache.go:107] acquiring lock: {Name:mk4ae9c4c22bd2dc58a3a64cb0d121f0b4b0f4dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 08:17:53.440641    3031 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0520 08:17:53.440638    3031 start.go:368] acquired machines lock for "test-preload-159000" in 59.542µs
	I0520 08:17:53.440700    3031 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0520 08:17:53.440685    3031 cache.go:107] acquiring lock: {Name:mk5fe58ed850d7ab27a9e95360f7c6049bbee3cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 08:17:53.440748    3031 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0520 08:17:53.440754    3031 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0520 08:17:53.440775    3031 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0520 08:17:53.440811    3031 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0520 08:17:53.440673    3031 start.go:93] Provisioning new machine with config: &{Name:test-preload-159000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-159000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 08:17:53.440912    3031 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 08:17:53.440932    3031 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0520 08:17:53.448180    3031 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 08:17:53.461885    3031 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 08:17:53.464371    3031 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0520 08:17:53.464541    3031 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0520 08:17:53.465620    3031 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0520 08:17:53.466078    3031 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0520 08:17:53.466349    3031 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0520 08:17:53.468530    3031 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0520 08:17:53.468684    3031 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0520 08:17:53.469190    3031 start.go:159] libmachine.API.Create for "test-preload-159000" (driver="qemu2")
	I0520 08:17:53.469201    3031 client.go:168] LocalClient.Create starting
	I0520 08:17:53.469261    3031 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca.pem
	I0520 08:17:53.469283    3031 main.go:141] libmachine: Decoding PEM data...
	I0520 08:17:53.469294    3031 main.go:141] libmachine: Parsing certificate...
	I0520 08:17:53.469345    3031 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/cert.pem
	I0520 08:17:53.469360    3031 main.go:141] libmachine: Decoding PEM data...
	I0520 08:17:53.469366    3031 main.go:141] libmachine: Parsing certificate...
	I0520 08:17:53.469658    3031 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16543-1012/.minikube/cache/iso/arm64/minikube-v1.30.1-1684174510-16506-arm64.iso...
	I0520 08:17:53.649844    3031 main.go:141] libmachine: Creating SSH key...
	I0520 08:17:53.829185    3031 main.go:141] libmachine: Creating Disk image...
	I0520 08:17:53.829203    3031 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 08:17:53.829412    3031 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/test-preload-159000/disk.qcow2.raw /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/test-preload-159000/disk.qcow2
	I0520 08:17:53.837982    3031 main.go:141] libmachine: STDOUT: 
	I0520 08:17:53.838001    3031 main.go:141] libmachine: STDERR: 
	I0520 08:17:53.838070    3031 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/test-preload-159000/disk.qcow2 +20000M
	I0520 08:17:53.845517    3031 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 08:17:53.845527    3031 main.go:141] libmachine: STDERR: 
	I0520 08:17:53.845541    3031 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/test-preload-159000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/test-preload-159000/disk.qcow2
	I0520 08:17:53.845546    3031 main.go:141] libmachine: Starting QEMU VM...
	I0520 08:17:53.845585    3031 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/test-preload-159000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/test-preload-159000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/test-preload-159000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:c2:bf:e4:94:0c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/test-preload-159000/disk.qcow2
	I0520 08:17:53.847133    3031 main.go:141] libmachine: STDOUT: 
	I0520 08:17:53.847154    3031 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 08:17:53.847172    3031 client.go:171] LocalClient.Create took 377.965417ms
	W0520 08:17:54.710598    3031 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0520 08:17:54.710629    3031 cache.go:162] opening:  /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	W0520 08:17:54.953376    3031 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0520 08:17:54.953406    3031 cache.go:162] opening:  /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0520 08:17:54.972635    3031 cache.go:162] opening:  /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0520 08:17:55.036960    3031 cache.go:162] opening:  /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0520 08:17:55.199161    3031 cache.go:162] opening:  /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0520 08:17:55.262884    3031 cache.go:162] opening:  /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0520 08:17:55.414168    3031 cache.go:162] opening:  /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0520 08:17:55.504236    3031 cache.go:157] /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0520 08:17:55.504257    3031 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 2.063902917s
	I0520 08:17:55.504283    3031 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0520 08:17:55.544543    3031 cache.go:157] /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0520 08:17:55.544587    3031 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 2.104214333s
	I0520 08:17:55.544605    3031 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	I0520 08:17:55.637427    3031 cache.go:162] opening:  /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0520 08:17:55.847372    3031 start.go:128] duration metric: createHost completed in 2.406443833s
	I0520 08:17:55.847410    3031 start.go:83] releasing machines lock for "test-preload-159000", held for 2.406758667s
	W0520 08:17:55.847480    3031 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:17:55.859695    3031 out.go:177] * Deleting "test-preload-159000" in qemu2 ...
	W0520 08:17:55.880656    3031 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:17:55.880692    3031 start.go:702] Will try again in 5 seconds ...
	I0520 08:17:56.629226    3031 cache.go:157] /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0520 08:17:56.629278    3031 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.188782667s
	I0520 08:17:56.629307    3031 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0520 08:17:57.968766    3031 cache.go:157] /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0520 08:17:57.968816    3031 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 4.528202209s
	I0520 08:17:57.968843    3031 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0520 08:17:58.653083    3031 cache.go:157] /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0520 08:17:58.653132    3031 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 5.212516958s
	I0520 08:17:58.653160    3031 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0520 08:17:58.856733    3031 cache.go:157] /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0520 08:17:58.856784    3031 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 5.416436583s
	I0520 08:17:58.856815    3031 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0520 08:17:59.563019    3031 cache.go:157] /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0520 08:17:59.563059    3031 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 6.122723166s
	I0520 08:17:59.563120    3031 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0520 08:18:00.880945    3031 start.go:364] acquiring machines lock for test-preload-159000: {Name:mk270c257f49033ffda3b70fea9b877c78314aa8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 08:18:00.881448    3031 start.go:368] acquired machines lock for "test-preload-159000" in 429.917µs
	I0520 08:18:00.881552    3031 start.go:93] Provisioning new machine with config: &{Name:test-preload-159000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-159000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 08:18:00.881783    3031 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 08:18:00.890598    3031 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 08:18:00.937307    3031 start.go:159] libmachine.API.Create for "test-preload-159000" (driver="qemu2")
	I0520 08:18:00.937348    3031 client.go:168] LocalClient.Create starting
	I0520 08:18:00.937477    3031 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca.pem
	I0520 08:18:00.937524    3031 main.go:141] libmachine: Decoding PEM data...
	I0520 08:18:00.937551    3031 main.go:141] libmachine: Parsing certificate...
	I0520 08:18:00.937643    3031 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/cert.pem
	I0520 08:18:00.937688    3031 main.go:141] libmachine: Decoding PEM data...
	I0520 08:18:00.937704    3031 main.go:141] libmachine: Parsing certificate...
	I0520 08:18:00.938212    3031 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16543-1012/.minikube/cache/iso/arm64/minikube-v1.30.1-1684174510-16506-arm64.iso...
	I0520 08:18:01.076067    3031 main.go:141] libmachine: Creating SSH key...
	I0520 08:18:01.107932    3031 main.go:141] libmachine: Creating Disk image...
	I0520 08:18:01.107938    3031 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 08:18:01.108071    3031 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/test-preload-159000/disk.qcow2.raw /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/test-preload-159000/disk.qcow2
	I0520 08:18:01.116590    3031 main.go:141] libmachine: STDOUT: 
	I0520 08:18:01.116604    3031 main.go:141] libmachine: STDERR: 
	I0520 08:18:01.116674    3031 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/test-preload-159000/disk.qcow2 +20000M
	I0520 08:18:01.123868    3031 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 08:18:01.123879    3031 main.go:141] libmachine: STDERR: 
	I0520 08:18:01.123894    3031 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/test-preload-159000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/test-preload-159000/disk.qcow2
	I0520 08:18:01.123900    3031 main.go:141] libmachine: Starting QEMU VM...
	I0520 08:18:01.123954    3031 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/test-preload-159000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/test-preload-159000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/test-preload-159000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:1a:53:b8:95:7b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/test-preload-159000/disk.qcow2
	I0520 08:18:01.125449    3031 main.go:141] libmachine: STDOUT: 
	I0520 08:18:01.125472    3031 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 08:18:01.125484    3031 client.go:171] LocalClient.Create took 188.130583ms
	I0520 08:18:03.127537    3031 start.go:128] duration metric: createHost completed in 2.245722292s
	I0520 08:18:03.127585    3031 start.go:83] releasing machines lock for "test-preload-159000", held for 2.24611575s
	W0520 08:18:03.128065    3031 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-159000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-159000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:18:03.137427    3031 out.go:177] 
	W0520 08:18:03.142775    3031 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 08:18:03.142847    3031 out.go:239] * 
	* 
	W0520 08:18:03.145151    3031 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 08:18:03.153591    3031 out.go:177] 

** /stderr **
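
Editor's note: the cache.go lines in the log above show all eight control-plane images being fetched concurrently, each behind its own named lock, and saved as per-arch tar files under .minikube/cache/images/arm64. The following is a minimal, stdlib-only Go sketch of that per-key locking pattern; the names (lockFor, cacheImage) are illustrative stand-ins, not minikube's actual internals.

// image_cache.go - sketch of per-image locking: downloads run in parallel,
// but no two goroutines can write the same cache entry at once.
package main

import (
	"fmt"
	"sync"
)

var (
	mu    sync.Mutex
	locks = map[string]*sync.Mutex{} // one lock per cache key
)

// lockFor returns the lock for a given image, creating it on first use.
func lockFor(key string) *sync.Mutex {
	mu.Lock()
	defer mu.Unlock()
	if l, ok := locks[key]; ok {
		return l
	}
	l := &sync.Mutex{}
	locks[key] = l
	return l
}

func cacheImage(name string, wg *sync.WaitGroup) {
	defer wg.Done()
	l := lockFor(name)
	l.Lock()
	defer l.Unlock()
	// Real code would pull the image and write the tar file here.
	fmt.Println("save to tar file", name, "succeeded")
}

func main() {
	images := []string{
		"registry.k8s.io/pause:3.7",
		"registry.k8s.io/etcd:3.5.3-0",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	}
	var wg sync.WaitGroup
	for _, img := range images {
		wg.Add(1)
		go cacheImage(img, &wg)
	}
	wg.Wait()
}
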
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-159000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:522: *** TestPreload FAILED at 2023-05-20 08:18:03.17157 -0700 PDT m=+860.160330543
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-159000 -n test-preload-159000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-159000 -n test-preload-159000: exit status 7 (66.958125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-159000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-159000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-159000
--- FAIL: TestPreload (10.01s)
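
Editor's note: this failure, like most in this run, is not preload-specific. Every qemu2 start dies on `Failed to connect to "/var/run/socket_vmnet": Connection refused`, which indicates the socket_vmnet daemon was not listening on the agent. A minimal Go sketch (not part of the test suite) that checks this precondition directly; the socket path matches the SocketVMnetPath shown in the config dumps above.

// probe_socket_vmnet.go - probe whether anything is accepting connections
// on the socket_vmnet unix socket before attempting a qemu2 start.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// Same condition the tests hit: the socket file may exist,
		// but "connection refused" means no daemon is behind it.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}
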

TestScheduledStopUnix (9.95s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-942000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-942000 --memory=2048 --driver=qemu2 : exit status 80 (9.773191875s)

-- stdout --
	* [scheduled-stop-942000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16543
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node scheduled-stop-942000 in cluster scheduled-stop-942000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-942000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-942000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-942000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16543
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node scheduled-stop-942000 in cluster scheduled-stop-942000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-942000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-942000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:522: *** TestScheduledStopUnix FAILED at 2023-05-20 08:18:13.112983 -0700 PDT m=+870.101760501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-942000 -n scheduled-stop-942000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-942000 -n scheduled-stop-942000: exit status 7 (67.142042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-942000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-942000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-942000
--- FAIL: TestScheduledStopUnix (9.95s)
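
Editor's note: the stdout above shows minikube's create-retry behavior: StartHost fails, the half-created profile is deleted, and creation is retried once ("Will try again in 5 seconds" in the TestPreload log). A rough Go sketch of that pattern follows; startHost and deleteHost are hypothetical stand-ins for the libmachine calls, not minikube's actual API.

// retry_start.go - delete-and-retry-once pattern seen in the logs above.
package main

import (
	"errors"
	"fmt"
	"time"
)

func startHost(name string) error {
	// Stand-in for the create path that fails in this run.
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func deleteHost(name string) { fmt.Printf("* Deleting %q ...\n", name) }

func main() {
	const profile = "scheduled-stop-942000"
	if err := startHost(profile); err != nil {
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		deleteHost(profile)
		time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds"
		if err := startHost(profile); err != nil {
			fmt.Printf("* Failed to start qemu2 VM: %v\n", err)
		}
	}
}
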

TestSkaffold (18.15s)

=== RUN   TestSkaffold
E0520 08:18:19.567162    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/ingress-addon-legacy-371000/client.crt: no such file or directory
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe1050305787 version
skaffold_test.go:63: skaffold version: v2.4.1
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-064000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-064000 --memory=2600 --driver=qemu2 : exit status 80 (9.773068291s)

-- stdout --
	* [skaffold-064000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16543
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node skaffold-064000 in cluster skaffold-064000
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-064000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-064000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-064000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16543
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node skaffold-064000 in cluster skaffold-064000
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-064000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-064000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:522: *** TestSkaffold FAILED at 2023-05-20 08:18:31.272321 -0700 PDT m=+888.261129918
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-064000 -n skaffold-064000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-064000 -n skaffold-064000: exit status 7 (62.1795ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-064000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-064000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-064000
--- FAIL: TestSkaffold (18.15s)
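
Editor's note: before each failed launch, the "Creating 20000 MB hard disk image" step in the verbose logs shells out to qemu-img twice, converting the raw boot disk to qcow2 and then growing it by 20000M. A self-contained Go sketch of those two invocations, under the assumptions that qemu-img is on PATH and the file names (disk.qcow2.raw, disk.qcow2) are illustrative:

// qemu_disk.go - the two qemu-img calls visible in the driver logs above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	raw, qcow := "disk.qcow2.raw", "disk.qcow2"
	// Convert the raw boot disk into qcow2, then grow it, as the
	// libmachine driver does before launching the VM.
	if err := run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow); err != nil {
		fmt.Fprintln(os.Stderr, "convert:", err)
		return
	}
	if err := run("qemu-img", "resize", qcow, "+20000M"); err != nil {
		fmt.Fprintln(os.Stderr, "resize:", err)
	}
}
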

TestRunningBinaryUpgrade (139.91s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
E0520 08:19:41.489621    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/ingress-addon-legacy-371000/client.crt: no such file or directory
version_upgrade_test.go:106: v1.6.2 release installation failed: bad response code: 404
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-05-20 08:21:31.152457 -0700 PDT m=+1068.139844084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-824000 -n running-upgrade-824000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-824000 -n running-upgrade-824000: exit status 85 (82.368ms)

-- stdout --
	* Profile "running-upgrade-824000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p running-upgrade-824000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "running-upgrade-824000" host is not running, skipping log retrieval (state="* Profile \"running-upgrade-824000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p running-upgrade-824000\"")
helpers_test.go:175: Cleaning up "running-upgrade-824000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-824000
--- FAIL: TestRunningBinaryUpgrade (139.91s)
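
Editor's note: this failure differs from the socket_vmnet ones. "v1.6.2 release installation failed: bad response code: 404" means the test could not download the legacy minikube release binary for this platform; v1.6.2 predates darwin/arm64 builds, so a 404 is plausible here. A small Go sketch of the kind of download precheck involved; the URL is a guess at the release-asset layout, not the exact one the test uses.

// check_release.go - HEAD-check a release asset and surface the status code.
package main

import (
	"fmt"
	"net/http"
	"os"
)

func main() {
	// Hypothetical asset URL for illustration only.
	url := "https://storage.googleapis.com/minikube/releases/v1.6.2/minikube-darwin-arm64"
	resp, err := http.Head(url)
	if err != nil {
		fmt.Fprintln(os.Stderr, "request failed:", err)
		os.Exit(1)
	}
	resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		// Matches the test's "bad response code" failure mode.
		fmt.Fprintf(os.Stderr, "bad response code: %d\n", resp.StatusCode)
		os.Exit(1)
	}
	fmt.Println("release asset is downloadable")
}
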

TestKubernetesUpgrade (15.24s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-022000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:234: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-022000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.723957583s)

-- stdout --
	* [kubernetes-upgrade-022000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16543
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kubernetes-upgrade-022000 in cluster kubernetes-upgrade-022000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-022000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 08:21:31.537059    3515 out.go:296] Setting OutFile to fd 1 ...
	I0520 08:21:31.537181    3515 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:21:31.537184    3515 out.go:309] Setting ErrFile to fd 2...
	I0520 08:21:31.537186    3515 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:21:31.537251    3515 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16543-1012/.minikube/bin
	I0520 08:21:31.538263    3515 out.go:303] Setting JSON to false
	I0520 08:21:31.553151    3515 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1262,"bootTime":1684594829,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 08:21:31.553210    3515 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0520 08:21:31.558436    3515 out.go:177] * [kubernetes-upgrade-022000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0520 08:21:31.565444    3515 notify.go:220] Checking for updates...
	I0520 08:21:31.567114    3515 out.go:177]   - MINIKUBE_LOCATION=16543
	I0520 08:21:31.571434    3515 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	I0520 08:21:31.574408    3515 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 08:21:31.575878    3515 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 08:21:31.579390    3515 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	I0520 08:21:31.582396    3515 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 08:21:31.585685    3515 config.go:182] Loaded profile config "cert-expiration-950000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0520 08:21:31.585748    3515 config.go:182] Loaded profile config "multinode-046000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0520 08:21:31.585766    3515 driver.go:375] Setting default libvirt URI to qemu:///system
	I0520 08:21:31.589298    3515 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 08:21:31.596343    3515 start.go:295] selected driver: qemu2
	I0520 08:21:31.596351    3515 start.go:870] validating driver "qemu2" against <nil>
	I0520 08:21:31.596358    3515 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 08:21:31.598160    3515 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0520 08:21:31.601326    3515 out.go:177] * Automatically selected the socket_vmnet network
	I0520 08:21:31.605449    3515 start_flags.go:897] Wait components to verify : map[apiserver:true system_pods:true]
	I0520 08:21:31.605466    3515 cni.go:84] Creating CNI manager for ""
	I0520 08:21:31.605475    3515 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0520 08:21:31.605479    3515 start_flags.go:319] config:
	{Name:kubernetes-upgrade-022000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-022000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0520 08:21:31.605558    3515 iso.go:125] acquiring lock: {Name:mkf05a14b47e9b67445252fbb4917f3bcb37f054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 08:21:31.614304    3515 out.go:177] * Starting control plane node kubernetes-upgrade-022000 in cluster kubernetes-upgrade-022000
	I0520 08:21:31.618303    3515 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0520 08:21:31.618326    3515 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0520 08:21:31.618341    3515 cache.go:57] Caching tarball of preloaded images
	I0520 08:21:31.618398    3515 preload.go:174] Found /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 08:21:31.618403    3515 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0520 08:21:31.618461    3515 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/kubernetes-upgrade-022000/config.json ...
	I0520 08:21:31.618472    3515 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/kubernetes-upgrade-022000/config.json: {Name:mk6d61b8f98f20db6981a34166c3de4b1a95f190 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 08:21:31.618665    3515 cache.go:195] Successfully downloaded all kic artifacts
	I0520 08:21:31.618675    3515 start.go:364] acquiring machines lock for kubernetes-upgrade-022000: {Name:mk270c257f49033ffda3b70fea9b877c78314aa8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 08:21:31.618703    3515 start.go:368] acquired machines lock for "kubernetes-upgrade-022000" in 22.667µs
	I0520 08:21:31.618718    3515 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-022000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-022000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 08:21:31.618742    3515 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 08:21:31.627335    3515 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 08:21:31.643557    3515 start.go:159] libmachine.API.Create for "kubernetes-upgrade-022000" (driver="qemu2")
	I0520 08:21:31.643586    3515 client.go:168] LocalClient.Create starting
	I0520 08:21:31.643675    3515 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca.pem
	I0520 08:21:31.643708    3515 main.go:141] libmachine: Decoding PEM data...
	I0520 08:21:31.643719    3515 main.go:141] libmachine: Parsing certificate...
	I0520 08:21:31.643768    3515 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/cert.pem
	I0520 08:21:31.643783    3515 main.go:141] libmachine: Decoding PEM data...
	I0520 08:21:31.643792    3515 main.go:141] libmachine: Parsing certificate...
	I0520 08:21:31.644163    3515 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16543-1012/.minikube/cache/iso/arm64/minikube-v1.30.1-1684174510-16506-arm64.iso...
	I0520 08:21:31.761996    3515 main.go:141] libmachine: Creating SSH key...
	I0520 08:21:31.812293    3515 main.go:141] libmachine: Creating Disk image...
	I0520 08:21:31.812301    3515 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 08:21:31.812450    3515 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/kubernetes-upgrade-022000/disk.qcow2.raw /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/kubernetes-upgrade-022000/disk.qcow2
	I0520 08:21:31.820555    3515 main.go:141] libmachine: STDOUT: 
	I0520 08:21:31.820569    3515 main.go:141] libmachine: STDERR: 
	I0520 08:21:31.820617    3515 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/kubernetes-upgrade-022000/disk.qcow2 +20000M
	I0520 08:21:31.827787    3515 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 08:21:31.827808    3515 main.go:141] libmachine: STDERR: 
	I0520 08:21:31.827825    3515 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/kubernetes-upgrade-022000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/kubernetes-upgrade-022000/disk.qcow2
	I0520 08:21:31.827836    3515 main.go:141] libmachine: Starting QEMU VM...
	I0520 08:21:31.827870    3515 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/kubernetes-upgrade-022000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/kubernetes-upgrade-022000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/kubernetes-upgrade-022000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:43:a1:81:15:18 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/kubernetes-upgrade-022000/disk.qcow2
	I0520 08:21:31.829558    3515 main.go:141] libmachine: STDOUT: 
	I0520 08:21:31.829572    3515 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 08:21:31.829593    3515 client.go:171] LocalClient.Create took 185.998208ms
	I0520 08:21:33.831791    3515 start.go:128] duration metric: createHost completed in 2.21302075s
	I0520 08:21:33.831917    3515 start.go:83] releasing machines lock for "kubernetes-upgrade-022000", held for 2.213206458s
	W0520 08:21:33.832050    3515 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:21:33.844379    3515 out.go:177] * Deleting "kubernetes-upgrade-022000" in qemu2 ...
	W0520 08:21:33.863420    3515 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:21:33.863448    3515 start.go:702] Will try again in 5 seconds ...
	I0520 08:21:38.865721    3515 start.go:364] acquiring machines lock for kubernetes-upgrade-022000: {Name:mk270c257f49033ffda3b70fea9b877c78314aa8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 08:21:38.866358    3515 start.go:368] acquired machines lock for "kubernetes-upgrade-022000" in 512.916µs
	I0520 08:21:38.866466    3515 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-022000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-022000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 08:21:38.866757    3515 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 08:21:38.876464    3515 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 08:21:38.925285    3515 start.go:159] libmachine.API.Create for "kubernetes-upgrade-022000" (driver="qemu2")
	I0520 08:21:38.925323    3515 client.go:168] LocalClient.Create starting
	I0520 08:21:38.925470    3515 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca.pem
	I0520 08:21:38.925524    3515 main.go:141] libmachine: Decoding PEM data...
	I0520 08:21:38.925549    3515 main.go:141] libmachine: Parsing certificate...
	I0520 08:21:38.925638    3515 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/cert.pem
	I0520 08:21:38.925670    3515 main.go:141] libmachine: Decoding PEM data...
	I0520 08:21:38.925689    3515 main.go:141] libmachine: Parsing certificate...
	I0520 08:21:38.926219    3515 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16543-1012/.minikube/cache/iso/arm64/minikube-v1.30.1-1684174510-16506-arm64.iso...
	I0520 08:21:39.047142    3515 main.go:141] libmachine: Creating SSH key...
	I0520 08:21:39.174359    3515 main.go:141] libmachine: Creating Disk image...
	I0520 08:21:39.174366    3515 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 08:21:39.174508    3515 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/kubernetes-upgrade-022000/disk.qcow2.raw /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/kubernetes-upgrade-022000/disk.qcow2
	I0520 08:21:39.183345    3515 main.go:141] libmachine: STDOUT: 
	I0520 08:21:39.183365    3515 main.go:141] libmachine: STDERR: 
	I0520 08:21:39.183460    3515 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/kubernetes-upgrade-022000/disk.qcow2 +20000M
	I0520 08:21:39.190802    3515 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 08:21:39.190816    3515 main.go:141] libmachine: STDERR: 
	I0520 08:21:39.190836    3515 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/kubernetes-upgrade-022000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/kubernetes-upgrade-022000/disk.qcow2
	I0520 08:21:39.190842    3515 main.go:141] libmachine: Starting QEMU VM...
	I0520 08:21:39.190876    3515 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/kubernetes-upgrade-022000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/kubernetes-upgrade-022000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/kubernetes-upgrade-022000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:f2:d8:df:00:5a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/kubernetes-upgrade-022000/disk.qcow2
	I0520 08:21:39.192449    3515 main.go:141] libmachine: STDOUT: 
	I0520 08:21:39.192465    3515 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 08:21:39.192477    3515 client.go:171] LocalClient.Create took 267.150167ms
	I0520 08:21:41.194640    3515 start.go:128] duration metric: createHost completed in 2.327862583s
	I0520 08:21:41.194707    3515 start.go:83] releasing machines lock for "kubernetes-upgrade-022000", held for 2.328328292s
	W0520 08:21:41.195403    3515 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-022000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-022000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:21:41.205889    3515 out.go:177] 
	W0520 08:21:41.209046    3515 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 08:21:41.209082    3515 out.go:239] * 
	* 
	W0520 08:21:41.211878    3515 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 08:21:41.221779    3515 out.go:177] 

** /stderr **
version_upgrade_test.go:236: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-022000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-022000
version_upgrade_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-022000 status --format={{.Host}}
version_upgrade_test.go:244: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-022000 status --format={{.Host}}: exit status 7 (34.457958ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:246: status error: exit status 7 (may be ok)
version_upgrade_test.go:255: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-022000 --memory=2200 --kubernetes-version=v1.27.2 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:255: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-022000 --memory=2200 --kubernetes-version=v1.27.2 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.178767667s)

-- stdout --
	* [kubernetes-upgrade-022000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16543
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node kubernetes-upgrade-022000 in cluster kubernetes-upgrade-022000
	* Restarting existing qemu2 VM for "kubernetes-upgrade-022000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-022000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 08:21:41.399870    3534 out.go:296] Setting OutFile to fd 1 ...
	I0520 08:21:41.399981    3534 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:21:41.399984    3534 out.go:309] Setting ErrFile to fd 2...
	I0520 08:21:41.399987    3534 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:21:41.400060    3534 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16543-1012/.minikube/bin
	I0520 08:21:41.401020    3534 out.go:303] Setting JSON to false
	I0520 08:21:41.416005    3534 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1272,"bootTime":1684594829,"procs":452,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 08:21:41.416082    3534 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0520 08:21:41.425451    3534 out.go:177] * [kubernetes-upgrade-022000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0520 08:21:41.429479    3534 out.go:177]   - MINIKUBE_LOCATION=16543
	I0520 08:21:41.429542    3534 notify.go:220] Checking for updates...
	I0520 08:21:41.435410    3534 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	I0520 08:21:41.438471    3534 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 08:21:41.441440    3534 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 08:21:41.444475    3534 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	I0520 08:21:41.447384    3534 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 08:21:41.450699    3534 config.go:182] Loaded profile config "kubernetes-upgrade-022000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0520 08:21:41.450917    3534 driver.go:375] Setting default libvirt URI to qemu:///system
	I0520 08:21:41.455447    3534 out.go:177] * Using the qemu2 driver based on existing profile
	I0520 08:21:41.462369    3534 start.go:295] selected driver: qemu2
	I0520 08:21:41.462375    3534 start.go:870] validating driver "qemu2" against &{Name:kubernetes-upgrade-022000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-022000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0520 08:21:41.462437    3534 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 08:21:41.464319    3534 cni.go:84] Creating CNI manager for ""
	I0520 08:21:41.464336    3534 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 08:21:41.464342    3534 start_flags.go:319] config:
	{Name:kubernetes-upgrade-022000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:kubernetes-upgrade-022000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0520 08:21:41.464419    3534 iso.go:125] acquiring lock: {Name:mkf05a14b47e9b67445252fbb4917f3bcb37f054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 08:21:41.472463    3534 out.go:177] * Starting control plane node kubernetes-upgrade-022000 in cluster kubernetes-upgrade-022000
	I0520 08:21:41.476385    3534 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0520 08:21:41.476417    3534 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0520 08:21:41.476432    3534 cache.go:57] Caching tarball of preloaded images
	I0520 08:21:41.476492    3534 preload.go:174] Found /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 08:21:41.476506    3534 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0520 08:21:41.476589    3534 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/kubernetes-upgrade-022000/config.json ...
	I0520 08:21:41.476949    3534 cache.go:195] Successfully downloaded all kic artifacts
	I0520 08:21:41.476960    3534 start.go:364] acquiring machines lock for kubernetes-upgrade-022000: {Name:mk270c257f49033ffda3b70fea9b877c78314aa8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 08:21:41.476986    3534 start.go:368] acquired machines lock for "kubernetes-upgrade-022000" in 20.75µs
	I0520 08:21:41.476997    3534 start.go:96] Skipping create...Using existing machine configuration
	I0520 08:21:41.477001    3534 fix.go:55] fixHost starting: 
	I0520 08:21:41.477115    3534 fix.go:103] recreateIfNeeded on kubernetes-upgrade-022000: state=Stopped err=<nil>
	W0520 08:21:41.477123    3534 fix.go:129] unexpected machine state, will restart: <nil>
	I0520 08:21:41.485426    3534 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-022000" ...
	I0520 08:21:41.489350    3534 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/kubernetes-upgrade-022000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/kubernetes-upgrade-022000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/kubernetes-upgrade-022000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:f2:d8:df:00:5a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/kubernetes-upgrade-022000/disk.qcow2
	I0520 08:21:41.491282    3534 main.go:141] libmachine: STDOUT: 
	I0520 08:21:41.491302    3534 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 08:21:41.491329    3534 fix.go:57] fixHost completed within 14.327084ms
	I0520 08:21:41.491334    3534 start.go:83] releasing machines lock for "kubernetes-upgrade-022000", held for 14.344542ms
	W0520 08:21:41.491342    3534 start.go:687] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 08:21:41.491403    3534 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:21:41.491408    3534 start.go:702] Will try again in 5 seconds ...
	I0520 08:21:46.493657    3534 start.go:364] acquiring machines lock for kubernetes-upgrade-022000: {Name:mk270c257f49033ffda3b70fea9b877c78314aa8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 08:21:46.494116    3534 start.go:368] acquired machines lock for "kubernetes-upgrade-022000" in 371.458µs
	I0520 08:21:46.494267    3534 start.go:96] Skipping create...Using existing machine configuration
	I0520 08:21:46.494285    3534 fix.go:55] fixHost starting: 
	I0520 08:21:46.495042    3534 fix.go:103] recreateIfNeeded on kubernetes-upgrade-022000: state=Stopped err=<nil>
	W0520 08:21:46.495069    3534 fix.go:129] unexpected machine state, will restart: <nil>
	I0520 08:21:46.500646    3534 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-022000" ...
	I0520 08:21:46.508612    3534 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/kubernetes-upgrade-022000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/kubernetes-upgrade-022000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/kubernetes-upgrade-022000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:f2:d8:df:00:5a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/kubernetes-upgrade-022000/disk.qcow2
	I0520 08:21:46.518098    3534 main.go:141] libmachine: STDOUT: 
	I0520 08:21:46.518151    3534 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 08:21:46.518244    3534 fix.go:57] fixHost completed within 23.956625ms
	I0520 08:21:46.518262    3534 start.go:83] releasing machines lock for "kubernetes-upgrade-022000", held for 24.12325ms
	W0520 08:21:46.518579    3534 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-022000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-022000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:21:46.526614    3534 out.go:177] 
	W0520 08:21:46.530681    3534 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 08:21:46.530713    3534 out.go:239] * 
	* 
	W0520 08:21:46.533302    3534 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 08:21:46.539582    3534 out.go:177] 

** /stderr **
version_upgrade_test.go:257: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-022000 --memory=2200 --kubernetes-version=v1.27.2 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:260: (dbg) Run:  kubectl --context kubernetes-upgrade-022000 version --output=json
version_upgrade_test.go:260: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-022000 version --output=json: exit status 1 (63.4835ms)

** stderr ** 
	error: context "kubernetes-upgrade-022000" does not exist

** /stderr **
version_upgrade_test.go:262: error running kubectl: exit status 1
panic.go:522: *** TestKubernetesUpgrade FAILED at 2023-05-20 08:21:46.616962 -0700 PDT m=+1083.604368751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-022000 -n kubernetes-upgrade-022000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-022000 -n kubernetes-upgrade-022000: exit status 7 (33.029333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-022000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-022000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-022000
--- FAIL: TestKubernetesUpgrade (15.24s)
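
Note on the shared failure mode: every qemu2 start in this run dies the same way, with socket_vmnet_client reporting 'Failed to connect to "/var/run/socket_vmnet": Connection refused' before the VM ever boots, i.e. the socket_vmnet daemon on the CI host is not listening on its Unix socket. The probe below is a minimal Go sketch of that same check; the socket path comes from the logs above, and the program itself is illustrative, not part of the test suite:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the Unix socket used by socket_vmnet_client; a refused
		// connection here reproduces the error seen in every failed start.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}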

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.39s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.30.1 on darwin (arm64)
- MINIKUBE_LOCATION=16543
- KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3883315218/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.39s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.22s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.30.1 on darwin (arm64)
- MINIKUBE_LOCATION=16543
- KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2254294380/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.22s)
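
Unlike the socket_vmnet failures elsewhere in this run, both TestHyperkitDriverSkipUpgrade subtests die before any upgrade logic executes: the hyperkit driver is Intel-only, so on this darwin/arm64 host minikube exits straight away with DRV_UNSUPPORTED_OS (exit status 56). A gate along the following lines would skip rather than fail on unsupported hosts; this is a sketch only, since the real tests live in driver_install_or_update_test.go and may be gated differently:

	package upgrade_test

	import (
		"runtime"
		"testing"
	)

	// Illustrative guard: skip hyperkit-specific tests on hosts where the
	// driver can never run, instead of failing with exit status 56.
	func TestHyperkitUpgradeSketch(t *testing.T) {
		if runtime.GOOS != "darwin" || runtime.GOARCH != "amd64" {
			t.Skipf("hyperkit requires darwin/amd64; host is %s/%s", runtime.GOOS, runtime.GOARCH)
		}
		// ... hyperkit upgrade assertions would run here on supported hosts ...
	}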

TestStoppedBinaryUpgrade/Setup (133.59s)

=== RUN   TestStoppedBinaryUpgrade/Setup
E0520 08:21:57.626404    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/ingress-addon-legacy-371000/client.crt: no such file or directory
version_upgrade_test.go:167: v1.6.2 release installation failed: bad response code: 404
--- FAIL: TestStoppedBinaryUpgrade/Setup (133.59s)
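
This Setup failure is a download error rather than a VM error: version_upgrade_test.go:167 reports that installing the v1.6.2 minikube release returned a 404. minikube v1.6.2 predates Apple Silicon support, so no darwin-arm64 asset exists for that tag. A HEAD request makes the point; the release-bucket URL layout below is an assumption based on how minikube binaries are normally published:

	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		// Assumed URL layout; v1.6.2 shipped no darwin-arm64 binary,
		// so a 404 here would match the failure above.
		url := "https://storage.googleapis.com/minikube/releases/v1.6.2/minikube-darwin-arm64"
		resp, err := http.Head(url)
		if err != nil {
			fmt.Println("request failed:", err)
			return
		}
		resp.Body.Close()
		fmt.Println(url, "->", resp.Status)
	}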

TestPause/serial/Start (9.74s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-350000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-350000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.675200875s)

-- stdout --
	* [pause-350000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16543
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node pause-350000 in cluster pause-350000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-350000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-350000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-350000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-350000 -n pause-350000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-350000 -n pause-350000: exit status 7 (67.899708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-350000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.74s)

TestNoKubernetes/serial/StartWithK8s (9.79s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-088000 --driver=qemu2 
E0520 08:22:25.333971    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/ingress-addon-legacy-371000/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-088000 --driver=qemu2 : exit status 80 (9.719635542s)

-- stdout --
	* [NoKubernetes-088000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16543
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node NoKubernetes-088000 in cluster NoKubernetes-088000
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-088000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-088000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-088000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-088000 -n NoKubernetes-088000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-088000 -n NoKubernetes-088000: exit status 7 (67.173208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-088000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.79s)

TestNoKubernetes/serial/StartWithStopK8s (5.46s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-088000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-088000 --no-kubernetes --driver=qemu2 : exit status 80 (5.393665167s)

-- stdout --
	* [NoKubernetes-088000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16543
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-088000
	* Restarting existing qemu2 VM for "NoKubernetes-088000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-088000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-088000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-088000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-088000 -n NoKubernetes-088000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-088000 -n NoKubernetes-088000: exit status 7 (69.39525ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-088000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.46s)

TestNoKubernetes/serial/Start (5.46s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-088000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-088000 --no-kubernetes --driver=qemu2 : exit status 80 (5.392884917s)

-- stdout --
	* [NoKubernetes-088000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16543
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-088000
	* Restarting existing qemu2 VM for "NoKubernetes-088000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-088000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-088000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-088000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-088000 -n NoKubernetes-088000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-088000 -n NoKubernetes-088000: exit status 7 (69.439459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-088000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.46s)

TestNoKubernetes/serial/StartNoArgs (5.47s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-088000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-088000 --driver=qemu2 : exit status 80 (5.39982575s)

-- stdout --
	* [NoKubernetes-088000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16543
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-088000
	* Restarting existing qemu2 VM for "NoKubernetes-088000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-088000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-088000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-088000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-088000 -n NoKubernetes-088000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-088000 -n NoKubernetes-088000: exit status 7 (68.937875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-088000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.47s)
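StartNoArgs differs only in that it reuses the existing NoKubernetes-088000 profile, so the refusal surfaces on the "Restarting existing qemu2 VM" path rather than on VM creation; the stale profile is a casualty of the socket outage, not its cause. Once the daemon is reachable again, the cleanup minikube itself prints above should suffice (commands taken verbatim from this log):

    out/minikube-darwin-arm64 delete -p NoKubernetes-088000
    out/minikube-darwin-arm64 start -p NoKubernetes-088000 --driver=qemu2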

TestNetworkPlugins/group/auto/Start (9.88s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-021000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-021000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.881848916s)

-- stdout --
	* [auto-021000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16543
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node auto-021000 in cluster auto-021000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-021000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 08:22:48.310863    3652 out.go:296] Setting OutFile to fd 1 ...
	I0520 08:22:48.311007    3652 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:22:48.311010    3652 out.go:309] Setting ErrFile to fd 2...
	I0520 08:22:48.311013    3652 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:22:48.311093    3652 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16543-1012/.minikube/bin
	I0520 08:22:48.312091    3652 out.go:303] Setting JSON to false
	I0520 08:22:48.327122    3652 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1339,"bootTime":1684594829,"procs":451,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 08:22:48.327194    3652 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0520 08:22:48.335616    3652 out.go:177] * [auto-021000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0520 08:22:48.339713    3652 out.go:177]   - MINIKUBE_LOCATION=16543
	I0520 08:22:48.339802    3652 notify.go:220] Checking for updates...
	I0520 08:22:48.346621    3652 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	I0520 08:22:48.350685    3652 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 08:22:48.354544    3652 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 08:22:48.357752    3652 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	I0520 08:22:48.360691    3652 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 08:22:48.363998    3652 config.go:182] Loaded profile config "multinode-046000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0520 08:22:48.364018    3652 driver.go:375] Setting default libvirt URI to qemu:///system
	I0520 08:22:48.367612    3652 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 08:22:48.374668    3652 start.go:295] selected driver: qemu2
	I0520 08:22:48.374673    3652 start.go:870] validating driver "qemu2" against <nil>
	I0520 08:22:48.374678    3652 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 08:22:48.376596    3652 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0520 08:22:48.379589    3652 out.go:177] * Automatically selected the socket_vmnet network
	I0520 08:22:48.383748    3652 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 08:22:48.383768    3652 cni.go:84] Creating CNI manager for ""
	I0520 08:22:48.383778    3652 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 08:22:48.383782    3652 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 08:22:48.383789    3652 start_flags.go:319] config:
	{Name:auto-021000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:auto-021000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0520 08:22:48.383877    3652 iso.go:125] acquiring lock: {Name:mkf05a14b47e9b67445252fbb4917f3bcb37f054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 08:22:48.392643    3652 out.go:177] * Starting control plane node auto-021000 in cluster auto-021000
	I0520 08:22:48.396690    3652 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0520 08:22:48.396712    3652 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0520 08:22:48.396726    3652 cache.go:57] Caching tarball of preloaded images
	I0520 08:22:48.396785    3652 preload.go:174] Found /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 08:22:48.396792    3652 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0520 08:22:48.396851    3652 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/auto-021000/config.json ...
	I0520 08:22:48.396863    3652 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/auto-021000/config.json: {Name:mkc4fcf8a9b7e8a2c2fff6b77fd43ae6a525dda0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 08:22:48.397061    3652 cache.go:195] Successfully downloaded all kic artifacts
	I0520 08:22:48.397072    3652 start.go:364] acquiring machines lock for auto-021000: {Name:mk270c257f49033ffda3b70fea9b877c78314aa8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 08:22:48.397102    3652 start.go:368] acquired machines lock for "auto-021000" in 25.208µs
	I0520 08:22:48.397120    3652 start.go:93] Provisioning new machine with config: &{Name:auto-021000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:auto-021000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 08:22:48.397147    3652 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 08:22:48.405677    3652 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 08:22:48.422563    3652 start.go:159] libmachine.API.Create for "auto-021000" (driver="qemu2")
	I0520 08:22:48.422594    3652 client.go:168] LocalClient.Create starting
	I0520 08:22:48.422657    3652 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca.pem
	I0520 08:22:48.422680    3652 main.go:141] libmachine: Decoding PEM data...
	I0520 08:22:48.422689    3652 main.go:141] libmachine: Parsing certificate...
	I0520 08:22:48.422738    3652 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/cert.pem
	I0520 08:22:48.422752    3652 main.go:141] libmachine: Decoding PEM data...
	I0520 08:22:48.422760    3652 main.go:141] libmachine: Parsing certificate...
	I0520 08:22:48.423090    3652 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16543-1012/.minikube/cache/iso/arm64/minikube-v1.30.1-1684174510-16506-arm64.iso...
	I0520 08:22:48.530364    3652 main.go:141] libmachine: Creating SSH key...
	I0520 08:22:48.687288    3652 main.go:141] libmachine: Creating Disk image...
	I0520 08:22:48.687300    3652 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 08:22:48.687480    3652 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/auto-021000/disk.qcow2.raw /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/auto-021000/disk.qcow2
	I0520 08:22:48.696641    3652 main.go:141] libmachine: STDOUT: 
	I0520 08:22:48.696661    3652 main.go:141] libmachine: STDERR: 
	I0520 08:22:48.696718    3652 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/auto-021000/disk.qcow2 +20000M
	I0520 08:22:48.703936    3652 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 08:22:48.703947    3652 main.go:141] libmachine: STDERR: 
	I0520 08:22:48.703967    3652 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/auto-021000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/auto-021000/disk.qcow2
	I0520 08:22:48.703976    3652 main.go:141] libmachine: Starting QEMU VM...
	I0520 08:22:48.704023    3652 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/auto-021000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/auto-021000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/auto-021000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:10:4f:18:b5:99 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/auto-021000/disk.qcow2
	I0520 08:22:48.705560    3652 main.go:141] libmachine: STDOUT: 
	I0520 08:22:48.705571    3652 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 08:22:48.705592    3652 client.go:171] LocalClient.Create took 282.994041ms
	I0520 08:22:50.707756    3652 start.go:128] duration metric: createHost completed in 2.310595208s
	I0520 08:22:50.707822    3652 start.go:83] releasing machines lock for "auto-021000", held for 2.310713792s
	W0520 08:22:50.707917    3652 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:22:50.714349    3652 out.go:177] * Deleting "auto-021000" in qemu2 ...
	W0520 08:22:50.738061    3652 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:22:50.738086    3652 start.go:702] Will try again in 5 seconds ...
	I0520 08:22:55.740351    3652 start.go:364] acquiring machines lock for auto-021000: {Name:mk270c257f49033ffda3b70fea9b877c78314aa8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 08:22:55.740812    3652 start.go:368] acquired machines lock for "auto-021000" in 359.959µs
	I0520 08:22:55.740915    3652 start.go:93] Provisioning new machine with config: &{Name:auto-021000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:auto-021000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 08:22:55.741196    3652 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 08:22:55.749999    3652 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 08:22:55.797903    3652 start.go:159] libmachine.API.Create for "auto-021000" (driver="qemu2")
	I0520 08:22:55.797958    3652 client.go:168] LocalClient.Create starting
	I0520 08:22:55.798075    3652 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca.pem
	I0520 08:22:55.798110    3652 main.go:141] libmachine: Decoding PEM data...
	I0520 08:22:55.798126    3652 main.go:141] libmachine: Parsing certificate...
	I0520 08:22:55.798208    3652 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/cert.pem
	I0520 08:22:55.798235    3652 main.go:141] libmachine: Decoding PEM data...
	I0520 08:22:55.798267    3652 main.go:141] libmachine: Parsing certificate...
	I0520 08:22:55.798799    3652 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16543-1012/.minikube/cache/iso/arm64/minikube-v1.30.1-1684174510-16506-arm64.iso...
	I0520 08:22:55.947457    3652 main.go:141] libmachine: Creating SSH key...
	I0520 08:22:56.101438    3652 main.go:141] libmachine: Creating Disk image...
	I0520 08:22:56.101445    3652 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 08:22:56.101625    3652 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/auto-021000/disk.qcow2.raw /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/auto-021000/disk.qcow2
	I0520 08:22:56.110621    3652 main.go:141] libmachine: STDOUT: 
	I0520 08:22:56.110635    3652 main.go:141] libmachine: STDERR: 
	I0520 08:22:56.110700    3652 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/auto-021000/disk.qcow2 +20000M
	I0520 08:22:56.117903    3652 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 08:22:56.117915    3652 main.go:141] libmachine: STDERR: 
	I0520 08:22:56.117927    3652 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/auto-021000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/auto-021000/disk.qcow2
	I0520 08:22:56.117933    3652 main.go:141] libmachine: Starting QEMU VM...
	I0520 08:22:56.117964    3652 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/auto-021000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/auto-021000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/auto-021000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:0f:b8:56:38:12 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/auto-021000/disk.qcow2
	I0520 08:22:56.119470    3652 main.go:141] libmachine: STDOUT: 
	I0520 08:22:56.119482    3652 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 08:22:56.119496    3652 client.go:171] LocalClient.Create took 321.531791ms
	I0520 08:22:58.121646    3652 start.go:128] duration metric: createHost completed in 2.380425916s
	I0520 08:22:58.121719    3652 start.go:83] releasing machines lock for "auto-021000", held for 2.380880166s
	W0520 08:22:58.122373    3652 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-021000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-021000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:22:58.134063    3652 out.go:177] 
	W0520 08:22:58.138202    3652 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 08:22:58.138266    3652 out.go:239] * 
	* 
	W0520 08:22:58.141044    3652 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 08:22:58.151003    3652 out.go:177] 

** /stderr **
net_test.go:113: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.88s)
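Note where the start actually dies: libmachine launches QEMU through the wrapper (/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 ...), and the wrapper exits with status 1 the moment its connect to the unix socket is refused, so qemu-system-aarch64 never runs and both create attempts fail within milliseconds. The refusal can be reproduced without minikube at all (a sketch using the BSD netcat bundled with macOS):

    # "Connection refused" here confirms the daemon is down, independent of the tests:
    nc -U /var/run/socket_vmnet < /dev/null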

TestNetworkPlugins/group/kindnet/Start (9.78s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-021000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-021000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.779699375s)

-- stdout --
	* [kindnet-021000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16543
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kindnet-021000 in cluster kindnet-021000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-021000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 08:23:00.289358    3766 out.go:296] Setting OutFile to fd 1 ...
	I0520 08:23:00.289526    3766 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:23:00.289528    3766 out.go:309] Setting ErrFile to fd 2...
	I0520 08:23:00.289531    3766 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:23:00.289601    3766 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16543-1012/.minikube/bin
	I0520 08:23:00.290627    3766 out.go:303] Setting JSON to false
	I0520 08:23:00.305615    3766 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1351,"bootTime":1684594829,"procs":452,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 08:23:00.305692    3766 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0520 08:23:00.314145    3766 out.go:177] * [kindnet-021000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0520 08:23:00.318198    3766 out.go:177]   - MINIKUBE_LOCATION=16543
	I0520 08:23:00.318230    3766 notify.go:220] Checking for updates...
	I0520 08:23:00.324078    3766 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	I0520 08:23:00.328184    3766 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 08:23:00.332037    3766 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 08:23:00.335087    3766 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	I0520 08:23:00.338184    3766 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 08:23:00.341423    3766 config.go:182] Loaded profile config "multinode-046000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0520 08:23:00.341441    3766 driver.go:375] Setting default libvirt URI to qemu:///system
	I0520 08:23:00.345136    3766 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 08:23:00.352112    3766 start.go:295] selected driver: qemu2
	I0520 08:23:00.352122    3766 start.go:870] validating driver "qemu2" against <nil>
	I0520 08:23:00.352138    3766 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 08:23:00.354085    3766 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0520 08:23:00.358106    3766 out.go:177] * Automatically selected the socket_vmnet network
	I0520 08:23:00.361241    3766 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 08:23:00.361261    3766 cni.go:84] Creating CNI manager for "kindnet"
	I0520 08:23:00.361264    3766 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0520 08:23:00.361278    3766 start_flags.go:319] config:
	{Name:kindnet-021000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:kindnet-021000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0520 08:23:00.361395    3766 iso.go:125] acquiring lock: {Name:mkf05a14b47e9b67445252fbb4917f3bcb37f054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 08:23:00.370092    3766 out.go:177] * Starting control plane node kindnet-021000 in cluster kindnet-021000
	I0520 08:23:00.373049    3766 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0520 08:23:00.373069    3766 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0520 08:23:00.373079    3766 cache.go:57] Caching tarball of preloaded images
	I0520 08:23:00.373136    3766 preload.go:174] Found /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 08:23:00.373141    3766 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0520 08:23:00.373194    3766 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/kindnet-021000/config.json ...
	I0520 08:23:00.373210    3766 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/kindnet-021000/config.json: {Name:mkcc14f1db45858731e52ec0093efc9890a797b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 08:23:00.373401    3766 cache.go:195] Successfully downloaded all kic artifacts
	I0520 08:23:00.373411    3766 start.go:364] acquiring machines lock for kindnet-021000: {Name:mk270c257f49033ffda3b70fea9b877c78314aa8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 08:23:00.373441    3766 start.go:368] acquired machines lock for "kindnet-021000" in 24.292µs
	I0520 08:23:00.373456    3766 start.go:93] Provisioning new machine with config: &{Name:kindnet-021000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:kindnet-021000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 08:23:00.373484    3766 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 08:23:00.382082    3766 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 08:23:00.398740    3766 start.go:159] libmachine.API.Create for "kindnet-021000" (driver="qemu2")
	I0520 08:23:00.398767    3766 client.go:168] LocalClient.Create starting
	I0520 08:23:00.398834    3766 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca.pem
	I0520 08:23:00.398855    3766 main.go:141] libmachine: Decoding PEM data...
	I0520 08:23:00.398869    3766 main.go:141] libmachine: Parsing certificate...
	I0520 08:23:00.398928    3766 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/cert.pem
	I0520 08:23:00.398943    3766 main.go:141] libmachine: Decoding PEM data...
	I0520 08:23:00.398949    3766 main.go:141] libmachine: Parsing certificate...
	I0520 08:23:00.399314    3766 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16543-1012/.minikube/cache/iso/arm64/minikube-v1.30.1-1684174510-16506-arm64.iso...
	I0520 08:23:00.538802    3766 main.go:141] libmachine: Creating SSH key...
	I0520 08:23:00.598132    3766 main.go:141] libmachine: Creating Disk image...
	I0520 08:23:00.598139    3766 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 08:23:00.598296    3766 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/kindnet-021000/disk.qcow2.raw /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/kindnet-021000/disk.qcow2
	I0520 08:23:00.606806    3766 main.go:141] libmachine: STDOUT: 
	I0520 08:23:00.606826    3766 main.go:141] libmachine: STDERR: 
	I0520 08:23:00.606882    3766 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/kindnet-021000/disk.qcow2 +20000M
	I0520 08:23:00.614070    3766 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 08:23:00.614083    3766 main.go:141] libmachine: STDERR: 
	I0520 08:23:00.614101    3766 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/kindnet-021000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/kindnet-021000/disk.qcow2
	I0520 08:23:00.614108    3766 main.go:141] libmachine: Starting QEMU VM...
	I0520 08:23:00.614141    3766 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/kindnet-021000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/kindnet-021000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/kindnet-021000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:3d:36:a8:7e:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/kindnet-021000/disk.qcow2
	I0520 08:23:00.615654    3766 main.go:141] libmachine: STDOUT: 
	I0520 08:23:00.615665    3766 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 08:23:00.615683    3766 client.go:171] LocalClient.Create took 216.911625ms
	I0520 08:23:02.617869    3766 start.go:128] duration metric: createHost completed in 2.24435375s
	I0520 08:23:02.617999    3766 start.go:83] releasing machines lock for "kindnet-021000", held for 2.244550792s
	W0520 08:23:02.618098    3766 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:23:02.628518    3766 out.go:177] * Deleting "kindnet-021000" in qemu2 ...
	W0520 08:23:02.649772    3766 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:23:02.649805    3766 start.go:702] Will try again in 5 seconds ...
	I0520 08:23:07.652091    3766 start.go:364] acquiring machines lock for kindnet-021000: {Name:mk270c257f49033ffda3b70fea9b877c78314aa8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 08:23:07.652735    3766 start.go:368] acquired machines lock for "kindnet-021000" in 509.291µs
	I0520 08:23:07.652857    3766 start.go:93] Provisioning new machine with config: &{Name:kindnet-021000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:kindnet-021000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 08:23:07.653137    3766 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 08:23:07.663175    3766 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 08:23:07.712362    3766 start.go:159] libmachine.API.Create for "kindnet-021000" (driver="qemu2")
	I0520 08:23:07.712419    3766 client.go:168] LocalClient.Create starting
	I0520 08:23:07.712529    3766 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca.pem
	I0520 08:23:07.712586    3766 main.go:141] libmachine: Decoding PEM data...
	I0520 08:23:07.712611    3766 main.go:141] libmachine: Parsing certificate...
	I0520 08:23:07.712687    3766 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/cert.pem
	I0520 08:23:07.712714    3766 main.go:141] libmachine: Decoding PEM data...
	I0520 08:23:07.712725    3766 main.go:141] libmachine: Parsing certificate...
	I0520 08:23:07.713217    3766 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16543-1012/.minikube/cache/iso/arm64/minikube-v1.30.1-1684174510-16506-arm64.iso...
	I0520 08:23:07.843568    3766 main.go:141] libmachine: Creating SSH key...
	I0520 08:23:07.984698    3766 main.go:141] libmachine: Creating Disk image...
	I0520 08:23:07.984704    3766 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 08:23:07.984873    3766 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/kindnet-021000/disk.qcow2.raw /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/kindnet-021000/disk.qcow2
	I0520 08:23:07.994001    3766 main.go:141] libmachine: STDOUT: 
	I0520 08:23:07.994019    3766 main.go:141] libmachine: STDERR: 
	I0520 08:23:07.994078    3766 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/kindnet-021000/disk.qcow2 +20000M
	I0520 08:23:08.001263    3766 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 08:23:08.001277    3766 main.go:141] libmachine: STDERR: 
	I0520 08:23:08.001290    3766 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/kindnet-021000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/kindnet-021000/disk.qcow2
	I0520 08:23:08.001298    3766 main.go:141] libmachine: Starting QEMU VM...
	I0520 08:23:08.001336    3766 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/kindnet-021000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/kindnet-021000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/kindnet-021000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:a0:fb:f5:ab:19 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/kindnet-021000/disk.qcow2
	I0520 08:23:08.002877    3766 main.go:141] libmachine: STDOUT: 
	I0520 08:23:08.002894    3766 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 08:23:08.002906    3766 client.go:171] LocalClient.Create took 290.482875ms
	I0520 08:23:10.005077    3766 start.go:128] duration metric: createHost completed in 2.351900042s
	I0520 08:23:10.005131    3766 start.go:83] releasing machines lock for "kindnet-021000", held for 2.35237475s
	W0520 08:23:10.005709    3766 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-021000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-021000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:23:10.013046    3766 out.go:177] 
	W0520 08:23:10.016520    3766 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 08:23:10.016546    3766 out.go:239] * 
	* 
	W0520 08:23:10.019062    3766 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 08:23:10.028390    3766 out.go:177] 

** /stderr **
net_test.go:113: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.78s)
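The remaining NetworkPlugins starts fail identically before any CNI-specific logic runs: the --cni flag only changes the generated cluster config (CNI:kindnet here, versus the auto-selected bridge CNI for the auto profile above), never the VM bring-up. Once the socket is restored, any of these can be retried by hand with the harness's exact arguments (copied from the net_test.go:111 invocation above):

    out/minikube-darwin-arm64 start -p kindnet-021000 --memory=3072 \
        --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2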

TestNetworkPlugins/group/calico/Start (9.82s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-021000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-021000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.813907583s)

-- stdout --
	* [calico-021000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16543
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node calico-021000 in cluster calico-021000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-021000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 08:23:12.260881    3882 out.go:296] Setting OutFile to fd 1 ...
	I0520 08:23:12.261015    3882 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:23:12.261018    3882 out.go:309] Setting ErrFile to fd 2...
	I0520 08:23:12.261021    3882 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:23:12.261098    3882 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16543-1012/.minikube/bin
	I0520 08:23:12.262171    3882 out.go:303] Setting JSON to false
	I0520 08:23:12.277592    3882 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1363,"bootTime":1684594829,"procs":451,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 08:23:12.277668    3882 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0520 08:23:12.283131    3882 out.go:177] * [calico-021000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0520 08:23:12.290992    3882 out.go:177]   - MINIKUBE_LOCATION=16543
	I0520 08:23:12.291044    3882 notify.go:220] Checking for updates...
	I0520 08:23:12.294092    3882 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	I0520 08:23:12.298052    3882 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 08:23:12.306105    3882 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 08:23:12.310084    3882 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	I0520 08:23:12.311393    3882 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 08:23:12.314303    3882 config.go:182] Loaded profile config "multinode-046000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0520 08:23:12.314326    3882 driver.go:375] Setting default libvirt URI to qemu:///system
	I0520 08:23:12.319011    3882 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 08:23:12.323980    3882 start.go:295] selected driver: qemu2
	I0520 08:23:12.323987    3882 start.go:870] validating driver "qemu2" against <nil>
	I0520 08:23:12.323992    3882 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 08:23:12.325953    3882 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0520 08:23:12.329035    3882 out.go:177] * Automatically selected the socket_vmnet network
	I0520 08:23:12.332988    3882 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 08:23:12.333013    3882 cni.go:84] Creating CNI manager for "calico"
	I0520 08:23:12.333017    3882 start_flags.go:314] Found "Calico" CNI - setting NetworkPlugin=cni
	I0520 08:23:12.333025    3882 start_flags.go:319] config:
	{Name:calico-021000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:calico-021000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0520 08:23:12.333105    3882 iso.go:125] acquiring lock: {Name:mkf05a14b47e9b67445252fbb4917f3bcb37f054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 08:23:12.342047    3882 out.go:177] * Starting control plane node calico-021000 in cluster calico-021000
	I0520 08:23:12.346007    3882 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0520 08:23:12.346034    3882 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0520 08:23:12.346046    3882 cache.go:57] Caching tarball of preloaded images
	I0520 08:23:12.346106    3882 preload.go:174] Found /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 08:23:12.346112    3882 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0520 08:23:12.346180    3882 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/calico-021000/config.json ...
	I0520 08:23:12.346197    3882 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/calico-021000/config.json: {Name:mkfeb59e36b77bcc71ac4b77497b3b0863929907 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 08:23:12.346398    3882 cache.go:195] Successfully downloaded all kic artifacts
	I0520 08:23:12.346409    3882 start.go:364] acquiring machines lock for calico-021000: {Name:mk270c257f49033ffda3b70fea9b877c78314aa8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 08:23:12.346441    3882 start.go:368] acquired machines lock for "calico-021000" in 25.917µs
	I0520 08:23:12.346458    3882 start.go:93] Provisioning new machine with config: &{Name:calico-021000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:calico-021000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 08:23:12.346488    3882 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 08:23:12.355023    3882 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 08:23:12.371951    3882 start.go:159] libmachine.API.Create for "calico-021000" (driver="qemu2")
	I0520 08:23:12.371973    3882 client.go:168] LocalClient.Create starting
	I0520 08:23:12.372038    3882 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca.pem
	I0520 08:23:12.372061    3882 main.go:141] libmachine: Decoding PEM data...
	I0520 08:23:12.372075    3882 main.go:141] libmachine: Parsing certificate...
	I0520 08:23:12.372110    3882 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/cert.pem
	I0520 08:23:12.372125    3882 main.go:141] libmachine: Decoding PEM data...
	I0520 08:23:12.372131    3882 main.go:141] libmachine: Parsing certificate...
	I0520 08:23:12.372442    3882 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16543-1012/.minikube/cache/iso/arm64/minikube-v1.30.1-1684174510-16506-arm64.iso...
	I0520 08:23:12.494837    3882 main.go:141] libmachine: Creating SSH key...
	I0520 08:23:12.631185    3882 main.go:141] libmachine: Creating Disk image...
	I0520 08:23:12.631193    3882 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 08:23:12.631354    3882 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/calico-021000/disk.qcow2.raw /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/calico-021000/disk.qcow2
	I0520 08:23:12.640076    3882 main.go:141] libmachine: STDOUT: 
	I0520 08:23:12.640100    3882 main.go:141] libmachine: STDERR: 
	I0520 08:23:12.640162    3882 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/calico-021000/disk.qcow2 +20000M
	I0520 08:23:12.647371    3882 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 08:23:12.647383    3882 main.go:141] libmachine: STDERR: 
	I0520 08:23:12.647410    3882 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/calico-021000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/calico-021000/disk.qcow2
	I0520 08:23:12.647417    3882 main.go:141] libmachine: Starting QEMU VM...
	I0520 08:23:12.647462    3882 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/calico-021000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/calico-021000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/calico-021000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:8a:90:a1:ea:c4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/calico-021000/disk.qcow2
	I0520 08:23:12.648921    3882 main.go:141] libmachine: STDOUT: 
	I0520 08:23:12.648931    3882 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 08:23:12.648951    3882 client.go:171] LocalClient.Create took 276.969166ms
	I0520 08:23:14.651181    3882 start.go:128] duration metric: createHost completed in 2.304665667s
	I0520 08:23:14.651255    3882 start.go:83] releasing machines lock for "calico-021000", held for 2.30480925s
	W0520 08:23:14.651320    3882 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:23:14.661946    3882 out.go:177] * Deleting "calico-021000" in qemu2 ...
	W0520 08:23:14.683970    3882 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:23:14.684004    3882 start.go:702] Will try again in 5 seconds ...
	I0520 08:23:19.686298    3882 start.go:364] acquiring machines lock for calico-021000: {Name:mk270c257f49033ffda3b70fea9b877c78314aa8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 08:23:19.686983    3882 start.go:368] acquired machines lock for "calico-021000" in 550.792µs
	I0520 08:23:19.687095    3882 start.go:93] Provisioning new machine with config: &{Name:calico-021000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:calico-021000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 08:23:19.687378    3882 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 08:23:19.697279    3882 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 08:23:19.743868    3882 start.go:159] libmachine.API.Create for "calico-021000" (driver="qemu2")
	I0520 08:23:19.743922    3882 client.go:168] LocalClient.Create starting
	I0520 08:23:19.744032    3882 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca.pem
	I0520 08:23:19.744068    3882 main.go:141] libmachine: Decoding PEM data...
	I0520 08:23:19.744093    3882 main.go:141] libmachine: Parsing certificate...
	I0520 08:23:19.744182    3882 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/cert.pem
	I0520 08:23:19.744210    3882 main.go:141] libmachine: Decoding PEM data...
	I0520 08:23:19.744225    3882 main.go:141] libmachine: Parsing certificate...
	I0520 08:23:19.744747    3882 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16543-1012/.minikube/cache/iso/arm64/minikube-v1.30.1-1684174510-16506-arm64.iso...
	I0520 08:23:19.868410    3882 main.go:141] libmachine: Creating SSH key...
	I0520 08:23:19.986999    3882 main.go:141] libmachine: Creating Disk image...
	I0520 08:23:19.987004    3882 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 08:23:19.987156    3882 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/calico-021000/disk.qcow2.raw /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/calico-021000/disk.qcow2
	I0520 08:23:19.995755    3882 main.go:141] libmachine: STDOUT: 
	I0520 08:23:19.995775    3882 main.go:141] libmachine: STDERR: 
	I0520 08:23:19.995838    3882 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/calico-021000/disk.qcow2 +20000M
	I0520 08:23:20.003044    3882 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 08:23:20.003057    3882 main.go:141] libmachine: STDERR: 
	I0520 08:23:20.003076    3882 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/calico-021000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/calico-021000/disk.qcow2
	I0520 08:23:20.003085    3882 main.go:141] libmachine: Starting QEMU VM...
	I0520 08:23:20.003130    3882 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/calico-021000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/calico-021000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/calico-021000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:3f:fb:80:d4:18 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/calico-021000/disk.qcow2
	I0520 08:23:20.004705    3882 main.go:141] libmachine: STDOUT: 
	I0520 08:23:20.004719    3882 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 08:23:20.004731    3882 client.go:171] LocalClient.Create took 260.801916ms
	I0520 08:23:22.006987    3882 start.go:128] duration metric: createHost completed in 2.31945625s
	I0520 08:23:22.007047    3882 start.go:83] releasing machines lock for "calico-021000", held for 2.320038792s
	W0520 08:23:22.007718    3882 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-021000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:23:22.018390    3882 out.go:177] 
	W0520 08:23:22.021453    3882 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 08:23:22.021495    3882 out.go:239] * 
	W0520 08:23:22.024030    3882 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 08:23:22.034321    3882 out.go:177] 

** /stderr **
net_test.go:113: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.82s)
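Every failure in this group reduces to the same STDERR line: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), which indicates that no socket_vmnet daemon was listening on the agent when QEMU was launched. As a diagnostic aid, the following Go snippet is a minimal sketch (hypothetical, not part of the minikube test suite) that probes the same socket path to confirm whether the daemon is up:

// probe.go - a minimal sketch that checks whether anything is listening
// on the socket_vmnet unix socket; a "connection refused" error here
// reproduces the failure condition seen in these logs.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path taken from the failing runs

	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is listening on", sock)
}

If the probe fails, restarting the socket_vmnet service on the agent is the likely remediation; minikube itself only sees the refused connection and exits with GUEST_PROVISION (exit status 80).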

TestNetworkPlugins/group/custom-flannel/Start (9.75s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-021000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-021000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.746263833s)

-- stdout --
	* [custom-flannel-021000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16543
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node custom-flannel-021000 in cluster custom-flannel-021000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-021000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 08:23:24.405194    3999 out.go:296] Setting OutFile to fd 1 ...
	I0520 08:23:24.405326    3999 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:23:24.405329    3999 out.go:309] Setting ErrFile to fd 2...
	I0520 08:23:24.405331    3999 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:23:24.405406    3999 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16543-1012/.minikube/bin
	I0520 08:23:24.406449    3999 out.go:303] Setting JSON to false
	I0520 08:23:24.421526    3999 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1375,"bootTime":1684594829,"procs":448,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 08:23:24.421586    3999 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0520 08:23:24.426028    3999 out.go:177] * [custom-flannel-021000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0520 08:23:24.429964    3999 out.go:177]   - MINIKUBE_LOCATION=16543
	I0520 08:23:24.430042    3999 notify.go:220] Checking for updates...
	I0520 08:23:24.437922    3999 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	I0520 08:23:24.440960    3999 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 08:23:24.444892    3999 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 08:23:24.448918    3999 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	I0520 08:23:24.450252    3999 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 08:23:24.453131    3999 config.go:182] Loaded profile config "multinode-046000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0520 08:23:24.453149    3999 driver.go:375] Setting default libvirt URI to qemu:///system
	I0520 08:23:24.456911    3999 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 08:23:24.461889    3999 start.go:295] selected driver: qemu2
	I0520 08:23:24.461897    3999 start.go:870] validating driver "qemu2" against <nil>
	I0520 08:23:24.461903    3999 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 08:23:24.463703    3999 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0520 08:23:24.467931    3999 out.go:177] * Automatically selected the socket_vmnet network
	I0520 08:23:24.469315    3999 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 08:23:24.469331    3999 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0520 08:23:24.469345    3999 start_flags.go:314] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0520 08:23:24.469352    3999 start_flags.go:319] config:
	{Name:custom-flannel-021000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:custom-flannel-021000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0520 08:23:24.469423    3999 iso.go:125] acquiring lock: {Name:mkf05a14b47e9b67445252fbb4917f3bcb37f054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 08:23:24.477901    3999 out.go:177] * Starting control plane node custom-flannel-021000 in cluster custom-flannel-021000
	I0520 08:23:24.480892    3999 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0520 08:23:24.480920    3999 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0520 08:23:24.480933    3999 cache.go:57] Caching tarball of preloaded images
	I0520 08:23:24.480990    3999 preload.go:174] Found /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 08:23:24.480995    3999 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0520 08:23:24.481063    3999 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/custom-flannel-021000/config.json ...
	I0520 08:23:24.481082    3999 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/custom-flannel-021000/config.json: {Name:mkc2695d0f4f981075aef6ab9e99eb8f21b6832e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 08:23:24.481291    3999 cache.go:195] Successfully downloaded all kic artifacts
	I0520 08:23:24.481302    3999 start.go:364] acquiring machines lock for custom-flannel-021000: {Name:mk270c257f49033ffda3b70fea9b877c78314aa8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 08:23:24.481329    3999 start.go:368] acquired machines lock for "custom-flannel-021000" in 23.083µs
	I0520 08:23:24.481345    3999 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-021000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:custom-flannel-021000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 08:23:24.481368    3999 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 08:23:24.489858    3999 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 08:23:24.506054    3999 start.go:159] libmachine.API.Create for "custom-flannel-021000" (driver="qemu2")
	I0520 08:23:24.506074    3999 client.go:168] LocalClient.Create starting
	I0520 08:23:24.506131    3999 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca.pem
	I0520 08:23:24.506149    3999 main.go:141] libmachine: Decoding PEM data...
	I0520 08:23:24.506159    3999 main.go:141] libmachine: Parsing certificate...
	I0520 08:23:24.506209    3999 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/cert.pem
	I0520 08:23:24.506223    3999 main.go:141] libmachine: Decoding PEM data...
	I0520 08:23:24.506230    3999 main.go:141] libmachine: Parsing certificate...
	I0520 08:23:24.506907    3999 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16543-1012/.minikube/cache/iso/arm64/minikube-v1.30.1-1684174510-16506-arm64.iso...
	I0520 08:23:24.616042    3999 main.go:141] libmachine: Creating SSH key...
	I0520 08:23:24.788979    3999 main.go:141] libmachine: Creating Disk image...
	I0520 08:23:24.788986    3999 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 08:23:24.789146    3999 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/custom-flannel-021000/disk.qcow2.raw /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/custom-flannel-021000/disk.qcow2
	I0520 08:23:24.798249    3999 main.go:141] libmachine: STDOUT: 
	I0520 08:23:24.798263    3999 main.go:141] libmachine: STDERR: 
	I0520 08:23:24.798314    3999 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/custom-flannel-021000/disk.qcow2 +20000M
	I0520 08:23:24.805530    3999 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 08:23:24.805552    3999 main.go:141] libmachine: STDERR: 
	I0520 08:23:24.805574    3999 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/custom-flannel-021000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/custom-flannel-021000/disk.qcow2
	I0520 08:23:24.805584    3999 main.go:141] libmachine: Starting QEMU VM...
	I0520 08:23:24.805625    3999 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/custom-flannel-021000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/custom-flannel-021000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/custom-flannel-021000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:ec:5a:04:62:d8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/custom-flannel-021000/disk.qcow2
	I0520 08:23:24.807199    3999 main.go:141] libmachine: STDOUT: 
	I0520 08:23:24.807214    3999 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 08:23:24.807237    3999 client.go:171] LocalClient.Create took 301.159042ms
	I0520 08:23:26.809423    3999 start.go:128] duration metric: createHost completed in 2.328004416s
	I0520 08:23:26.809479    3999 start.go:83] releasing machines lock for "custom-flannel-021000", held for 2.328144167s
	W0520 08:23:26.809537    3999 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:23:26.822051    3999 out.go:177] * Deleting "custom-flannel-021000" in qemu2 ...
	W0520 08:23:26.841188    3999 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:23:26.841210    3999 start.go:702] Will try again in 5 seconds ...
	I0520 08:23:31.843446    3999 start.go:364] acquiring machines lock for custom-flannel-021000: {Name:mk270c257f49033ffda3b70fea9b877c78314aa8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 08:23:31.843935    3999 start.go:368] acquired machines lock for "custom-flannel-021000" in 403.791µs
	I0520 08:23:31.844030    3999 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-021000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:custom-flannel-021000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 08:23:31.844367    3999 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 08:23:31.853304    3999 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 08:23:31.899746    3999 start.go:159] libmachine.API.Create for "custom-flannel-021000" (driver="qemu2")
	I0520 08:23:31.899777    3999 client.go:168] LocalClient.Create starting
	I0520 08:23:31.899901    3999 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca.pem
	I0520 08:23:31.899935    3999 main.go:141] libmachine: Decoding PEM data...
	I0520 08:23:31.899950    3999 main.go:141] libmachine: Parsing certificate...
	I0520 08:23:31.900066    3999 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/cert.pem
	I0520 08:23:31.900093    3999 main.go:141] libmachine: Decoding PEM data...
	I0520 08:23:31.900108    3999 main.go:141] libmachine: Parsing certificate...
	I0520 08:23:31.900605    3999 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16543-1012/.minikube/cache/iso/arm64/minikube-v1.30.1-1684174510-16506-arm64.iso...
	I0520 08:23:32.021621    3999 main.go:141] libmachine: Creating SSH key...
	I0520 08:23:32.072970    3999 main.go:141] libmachine: Creating Disk image...
	I0520 08:23:32.072976    3999 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 08:23:32.073129    3999 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/custom-flannel-021000/disk.qcow2.raw /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/custom-flannel-021000/disk.qcow2
	I0520 08:23:32.081467    3999 main.go:141] libmachine: STDOUT: 
	I0520 08:23:32.081480    3999 main.go:141] libmachine: STDERR: 
	I0520 08:23:32.081543    3999 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/custom-flannel-021000/disk.qcow2 +20000M
	I0520 08:23:32.088732    3999 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 08:23:32.088748    3999 main.go:141] libmachine: STDERR: 
	I0520 08:23:32.088764    3999 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/custom-flannel-021000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/custom-flannel-021000/disk.qcow2
	I0520 08:23:32.088769    3999 main.go:141] libmachine: Starting QEMU VM...
	I0520 08:23:32.088802    3999 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/custom-flannel-021000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/custom-flannel-021000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/custom-flannel-021000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:1f:e3:6d:bd:99 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/custom-flannel-021000/disk.qcow2
	I0520 08:23:32.090299    3999 main.go:141] libmachine: STDOUT: 
	I0520 08:23:32.090312    3999 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 08:23:32.090324    3999 client.go:171] LocalClient.Create took 190.542458ms
	I0520 08:23:34.092645    3999 start.go:128] duration metric: createHost completed in 2.248253833s
	I0520 08:23:34.092693    3999 start.go:83] releasing machines lock for "custom-flannel-021000", held for 2.248740292s
	W0520 08:23:34.093293    3999 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-021000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:23:34.102796    3999 out.go:177] 
	W0520 08:23:34.106871    3999 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 08:23:34.106924    3999 out.go:239] * 
	W0520 08:23:34.108944    3999 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 08:23:34.115750    3999 out.go:177] 

** /stderr **
net_test.go:113: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.75s)
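Note that each run gets through the disk-image phase cleanly before failing at VM start: the logs above show qemu-img convert (raw to qcow2) followed by qemu-img resize +20000M, both with empty STDERR. Below is a hedged Go sketch of those two steps as recorded in the logs (paths and error handling are simplified and hypothetical; this is not minikube's actual qemu2 driver code):

// createDisk mirrors the two qemu-img invocations recorded above:
// convert the raw boot image to qcow2, then grow it by the requested size.
package main

import (
	"fmt"
	"os/exec"
)

func createDisk(rawPath, qcow2Path string, extraMB int) error {
	// qemu-img convert -f raw -O qcow2 <raw> <qcow2>
	if out, err := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2",
		rawPath, qcow2Path).CombinedOutput(); err != nil {
		return fmt.Errorf("convert: %v: %s", err, out)
	}
	// qemu-img resize <qcow2> +<N>M (the logs show +20000M)
	if out, err := exec.Command("qemu-img", "resize", qcow2Path,
		fmt.Sprintf("+%dM", extraMB)).CombinedOutput(); err != nil {
		return fmt.Errorf("resize: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := createDisk("disk.qcow2.raw", "disk.qcow2", 20000); err != nil {
		fmt.Println(err)
	}
}

That the disk steps succeed while socket_vmnet_client fails narrows the fault to the host networking helper rather than QEMU or the test images.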

TestNetworkPlugins/group/false/Start (9.84s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-arm64 start -p false-021000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
E0520 08:23:39.745859    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/functional-537000/client.crt: no such file or directory
net_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-021000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.836924792s)

-- stdout --
	* [false-021000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16543
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node false-021000 in cluster false-021000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-021000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 08:23:36.469063    4116 out.go:296] Setting OutFile to fd 1 ...
	I0520 08:23:36.469211    4116 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:23:36.469214    4116 out.go:309] Setting ErrFile to fd 2...
	I0520 08:23:36.469216    4116 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:23:36.469288    4116 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16543-1012/.minikube/bin
	I0520 08:23:36.470338    4116 out.go:303] Setting JSON to false
	I0520 08:23:36.485582    4116 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1387,"bootTime":1684594829,"procs":448,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 08:23:36.485657    4116 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0520 08:23:36.490733    4116 out.go:177] * [false-021000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0520 08:23:36.498726    4116 out.go:177]   - MINIKUBE_LOCATION=16543
	I0520 08:23:36.498776    4116 notify.go:220] Checking for updates...
	I0520 08:23:36.505629    4116 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	I0520 08:23:36.508700    4116 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 08:23:36.512739    4116 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 08:23:36.515708    4116 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	I0520 08:23:36.518680    4116 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 08:23:36.522011    4116 config.go:182] Loaded profile config "multinode-046000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0520 08:23:36.522035    4116 driver.go:375] Setting default libvirt URI to qemu:///system
	I0520 08:23:36.525567    4116 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 08:23:36.532699    4116 start.go:295] selected driver: qemu2
	I0520 08:23:36.532703    4116 start.go:870] validating driver "qemu2" against <nil>
	I0520 08:23:36.532714    4116 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 08:23:36.534604    4116 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0520 08:23:36.536139    4116 out.go:177] * Automatically selected the socket_vmnet network
	I0520 08:23:36.539777    4116 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 08:23:36.539803    4116 cni.go:84] Creating CNI manager for "false"
	I0520 08:23:36.539812    4116 start_flags.go:319] config:
	{Name:false-021000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:false-021000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0520 08:23:36.539905    4116 iso.go:125] acquiring lock: {Name:mkf05a14b47e9b67445252fbb4917f3bcb37f054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 08:23:36.548631    4116 out.go:177] * Starting control plane node false-021000 in cluster false-021000
	I0520 08:23:36.552669    4116 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0520 08:23:36.552691    4116 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0520 08:23:36.552716    4116 cache.go:57] Caching tarball of preloaded images
	I0520 08:23:36.552776    4116 preload.go:174] Found /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 08:23:36.552781    4116 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0520 08:23:36.552844    4116 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/false-021000/config.json ...
	I0520 08:23:36.552860    4116 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/false-021000/config.json: {Name:mk7110e7995d42bda18de976fcf744e1278a6bc2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 08:23:36.553058    4116 cache.go:195] Successfully downloaded all kic artifacts
	I0520 08:23:36.553071    4116 start.go:364] acquiring machines lock for false-021000: {Name:mk270c257f49033ffda3b70fea9b877c78314aa8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 08:23:36.553101    4116 start.go:368] acquired machines lock for "false-021000" in 24.75µs
	I0520 08:23:36.553116    4116 start.go:93] Provisioning new machine with config: &{Name:false-021000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:false-021000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 08:23:36.553144    4116 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 08:23:36.559722    4116 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 08:23:36.576694    4116 start.go:159] libmachine.API.Create for "false-021000" (driver="qemu2")
	I0520 08:23:36.576715    4116 client.go:168] LocalClient.Create starting
	I0520 08:23:36.576772    4116 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca.pem
	I0520 08:23:36.576792    4116 main.go:141] libmachine: Decoding PEM data...
	I0520 08:23:36.576802    4116 main.go:141] libmachine: Parsing certificate...
	I0520 08:23:36.576836    4116 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/cert.pem
	I0520 08:23:36.576852    4116 main.go:141] libmachine: Decoding PEM data...
	I0520 08:23:36.576858    4116 main.go:141] libmachine: Parsing certificate...
	I0520 08:23:36.577181    4116 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16543-1012/.minikube/cache/iso/arm64/minikube-v1.30.1-1684174510-16506-arm64.iso...
	I0520 08:23:36.690726    4116 main.go:141] libmachine: Creating SSH key...
	I0520 08:23:36.901018    4116 main.go:141] libmachine: Creating Disk image...
	I0520 08:23:36.901027    4116 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 08:23:36.901184    4116 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/false-021000/disk.qcow2.raw /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/false-021000/disk.qcow2
	I0520 08:23:36.909990    4116 main.go:141] libmachine: STDOUT: 
	I0520 08:23:36.910005    4116 main.go:141] libmachine: STDERR: 
	I0520 08:23:36.910067    4116 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/false-021000/disk.qcow2 +20000M
	I0520 08:23:36.917337    4116 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 08:23:36.917355    4116 main.go:141] libmachine: STDERR: 
	I0520 08:23:36.917374    4116 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/false-021000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/false-021000/disk.qcow2
	I0520 08:23:36.917379    4116 main.go:141] libmachine: Starting QEMU VM...
	I0520 08:23:36.917425    4116 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/false-021000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/false-021000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/false-021000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:82:32:0e:fb:22 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/false-021000/disk.qcow2
	I0520 08:23:36.919029    4116 main.go:141] libmachine: STDOUT: 
	I0520 08:23:36.919041    4116 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 08:23:36.919061    4116 client.go:171] LocalClient.Create took 342.342958ms
	I0520 08:23:38.921297    4116 start.go:128] duration metric: createHost completed in 2.368134458s
	I0520 08:23:38.921348    4116 start.go:83] releasing machines lock for "false-021000", held for 2.368241583s
	W0520 08:23:38.921399    4116 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:23:38.931751    4116 out.go:177] * Deleting "false-021000" in qemu2 ...
	W0520 08:23:38.953222    4116 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:23:38.953254    4116 start.go:702] Will try again in 5 seconds ...
	I0520 08:23:43.955462    4116 start.go:364] acquiring machines lock for false-021000: {Name:mk270c257f49033ffda3b70fea9b877c78314aa8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 08:23:43.955909    4116 start.go:368] acquired machines lock for "false-021000" in 367.709µs
	I0520 08:23:43.956029    4116 start.go:93] Provisioning new machine with config: &{Name:false-021000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:false-021000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 08:23:43.956349    4116 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 08:23:43.965896    4116 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 08:23:44.012897    4116 start.go:159] libmachine.API.Create for "false-021000" (driver="qemu2")
	I0520 08:23:44.012941    4116 client.go:168] LocalClient.Create starting
	I0520 08:23:44.013076    4116 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca.pem
	I0520 08:23:44.013125    4116 main.go:141] libmachine: Decoding PEM data...
	I0520 08:23:44.013150    4116 main.go:141] libmachine: Parsing certificate...
	I0520 08:23:44.013236    4116 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/cert.pem
	I0520 08:23:44.013275    4116 main.go:141] libmachine: Decoding PEM data...
	I0520 08:23:44.013292    4116 main.go:141] libmachine: Parsing certificate...
	I0520 08:23:44.013770    4116 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16543-1012/.minikube/cache/iso/arm64/minikube-v1.30.1-1684174510-16506-arm64.iso...
	I0520 08:23:44.137495    4116 main.go:141] libmachine: Creating SSH key...
	I0520 08:23:44.215563    4116 main.go:141] libmachine: Creating Disk image...
	I0520 08:23:44.215571    4116 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 08:23:44.215735    4116 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/false-021000/disk.qcow2.raw /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/false-021000/disk.qcow2
	I0520 08:23:44.225661    4116 main.go:141] libmachine: STDOUT: 
	I0520 08:23:44.225679    4116 main.go:141] libmachine: STDERR: 
	I0520 08:23:44.225770    4116 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/false-021000/disk.qcow2 +20000M
	I0520 08:23:44.233234    4116 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 08:23:44.233246    4116 main.go:141] libmachine: STDERR: 
	I0520 08:23:44.233260    4116 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/false-021000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/false-021000/disk.qcow2
	I0520 08:23:44.233271    4116 main.go:141] libmachine: Starting QEMU VM...
	I0520 08:23:44.233310    4116 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/false-021000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/false-021000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/false-021000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:9e:1c:18:8d:dd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/false-021000/disk.qcow2
	I0520 08:23:44.234859    4116 main.go:141] libmachine: STDOUT: 
	I0520 08:23:44.234870    4116 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 08:23:44.234887    4116 client.go:171] LocalClient.Create took 221.942042ms
	I0520 08:23:46.237087    4116 start.go:128] duration metric: createHost completed in 2.280690375s
	I0520 08:23:46.237189    4116 start.go:83] releasing machines lock for "false-021000", held for 2.281260125s
	W0520 08:23:46.237888    4116 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-021000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-021000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:23:46.247600    4116 out.go:177] 
	W0520 08:23:46.251730    4116 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 08:23:46.251755    4116 out.go:239] * 
	* 
	W0520 08:23:46.254270    4116 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 08:23:46.264649    4116 out.go:177] 

** /stderr **
net_test.go:113: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.84s)
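The failure mode is identical across this whole group: every qemu-system-aarch64 launch is wrapped in socket_vmnet_client, and the client exits immediately with 'Failed to connect to "/var/run/socket_vmnet": Connection refused', meaning nothing is listening on the socket_vmnet unix socket on this agent. A minimal Go probe that reproduces the check outside of minikube (a sketch of ours, assuming only the default socket path seen in the logs):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// probeSocketVMnet dials the unix socket that socket_vmnet_client
	// connects to. A "connection refused" error here means no daemon is
	// listening, which matches the STDERR lines in the trace above.
	func probeSocketVMnet(path string) error {
		conn, err := net.DialTimeout("unix", path, 2*time.Second)
		if err != nil {
			return err
		}
		return conn.Close()
	}

	func main() {
		if err := probeSocketVMnet("/var/run/socket_vmnet"); err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		fmt.Println("socket_vmnet is listening")
	}

If this probe fails on the build agent, restarting the socket_vmnet daemon should clear every failure in this group, since each test dies before its VM ever boots.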

TestNetworkPlugins/group/enable-default-cni/Start (9.72s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-021000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-021000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.719266125s)

-- stdout --
	* [enable-default-cni-021000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16543
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node enable-default-cni-021000 in cluster enable-default-cni-021000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-021000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
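The stdout above shows the recovery path minikube takes on this error: the first create fails, the half-built "enable-default-cni-021000" machine is deleted, and after a fixed delay the create is attempted once more ("Will try again in 5 seconds ..." in the stderr trace below). A rough Go sketch of that control flow; the function names are ours, not minikube's:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startWithRetry mirrors the pattern in the traces: try to create the
	// host, delete the partial machine on failure, wait a fixed delay,
	// and try exactly once more before giving up.
	func startWithRetry(create func() error, cleanup func(), delay time.Duration) error {
		err := create()
		if err == nil {
			return nil
		}
		fmt.Println("! StartHost failed, but will try again:", err)
		cleanup()
		time.Sleep(delay)
		return create()
	}

	func main() {
		create := func() error {
			return errors.New(`connect to "/var/run/socket_vmnet": connection refused`)
		}
		cleanup := func() { fmt.Println(`* Deleting "enable-default-cni-021000" ...`) }
		if err := startWithRetry(create, cleanup, 5*time.Second); err != nil {
			fmt.Println("X Exiting:", err) // both attempts failed
		}
	}

With socket_vmnet down, the retry necessarily hits the same connection refusal, which is why every trace shows exactly two failed create attempts before exit status 80.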
** stderr ** 
	I0520 08:23:48.439790    4230 out.go:296] Setting OutFile to fd 1 ...
	I0520 08:23:48.439921    4230 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:23:48.439924    4230 out.go:309] Setting ErrFile to fd 2...
	I0520 08:23:48.439927    4230 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:23:48.439998    4230 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16543-1012/.minikube/bin
	I0520 08:23:48.441051    4230 out.go:303] Setting JSON to false
	I0520 08:23:48.456017    4230 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1399,"bootTime":1684594829,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 08:23:48.456085    4230 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0520 08:23:48.463601    4230 out.go:177] * [enable-default-cni-021000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0520 08:23:48.467673    4230 out.go:177]   - MINIKUBE_LOCATION=16543
	I0520 08:23:48.467692    4230 notify.go:220] Checking for updates...
	I0520 08:23:48.474578    4230 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	I0520 08:23:48.478656    4230 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 08:23:48.482640    4230 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 08:23:48.485636    4230 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	I0520 08:23:48.488654    4230 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 08:23:48.492005    4230 config.go:182] Loaded profile config "multinode-046000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0520 08:23:48.492029    4230 driver.go:375] Setting default libvirt URI to qemu:///system
	I0520 08:23:48.496573    4230 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 08:23:48.503631    4230 start.go:295] selected driver: qemu2
	I0520 08:23:48.503639    4230 start.go:870] validating driver "qemu2" against <nil>
	I0520 08:23:48.503648    4230 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 08:23:48.505520    4230 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0520 08:23:48.508655    4230 out.go:177] * Automatically selected the socket_vmnet network
	E0520 08:23:48.512659    4230 start_flags.go:453] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0520 08:23:48.512669    4230 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 08:23:48.512683    4230 cni.go:84] Creating CNI manager for "bridge"
	I0520 08:23:48.512687    4230 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 08:23:48.512692    4230 start_flags.go:319] config:
	{Name:enable-default-cni-021000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:enable-default-cni-021000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0520 08:23:48.512776    4230 iso.go:125] acquiring lock: {Name:mkf05a14b47e9b67445252fbb4917f3bcb37f054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 08:23:48.521579    4230 out.go:177] * Starting control plane node enable-default-cni-021000 in cluster enable-default-cni-021000
	I0520 08:23:48.525669    4230 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0520 08:23:48.525697    4230 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0520 08:23:48.525710    4230 cache.go:57] Caching tarball of preloaded images
	I0520 08:23:48.525768    4230 preload.go:174] Found /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 08:23:48.525774    4230 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0520 08:23:48.525833    4230 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/enable-default-cni-021000/config.json ...
	I0520 08:23:48.525846    4230 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/enable-default-cni-021000/config.json: {Name:mk5962d02c74e40b85364dc3c8dabe9071a9548e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 08:23:48.526052    4230 cache.go:195] Successfully downloaded all kic artifacts
	I0520 08:23:48.526064    4230 start.go:364] acquiring machines lock for enable-default-cni-021000: {Name:mk270c257f49033ffda3b70fea9b877c78314aa8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 08:23:48.526095    4230 start.go:368] acquired machines lock for "enable-default-cni-021000" in 25.875µs
	I0520 08:23:48.526109    4230 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-021000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:enable-default-cni-021000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 08:23:48.526135    4230 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 08:23:48.533623    4230 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 08:23:48.550915    4230 start.go:159] libmachine.API.Create for "enable-default-cni-021000" (driver="qemu2")
	I0520 08:23:48.550940    4230 client.go:168] LocalClient.Create starting
	I0520 08:23:48.550999    4230 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca.pem
	I0520 08:23:48.551020    4230 main.go:141] libmachine: Decoding PEM data...
	I0520 08:23:48.551033    4230 main.go:141] libmachine: Parsing certificate...
	I0520 08:23:48.551081    4230 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/cert.pem
	I0520 08:23:48.551096    4230 main.go:141] libmachine: Decoding PEM data...
	I0520 08:23:48.551104    4230 main.go:141] libmachine: Parsing certificate...
	I0520 08:23:48.551434    4230 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16543-1012/.minikube/cache/iso/arm64/minikube-v1.30.1-1684174510-16506-arm64.iso...
	I0520 08:23:48.663617    4230 main.go:141] libmachine: Creating SSH key...
	I0520 08:23:48.742551    4230 main.go:141] libmachine: Creating Disk image...
	I0520 08:23:48.742556    4230 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 08:23:48.742857    4230 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/enable-default-cni-021000/disk.qcow2.raw /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/enable-default-cni-021000/disk.qcow2
	I0520 08:23:48.751454    4230 main.go:141] libmachine: STDOUT: 
	I0520 08:23:48.751469    4230 main.go:141] libmachine: STDERR: 
	I0520 08:23:48.751539    4230 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/enable-default-cni-021000/disk.qcow2 +20000M
	I0520 08:23:48.758655    4230 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 08:23:48.758667    4230 main.go:141] libmachine: STDERR: 
	I0520 08:23:48.758694    4230 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/enable-default-cni-021000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/enable-default-cni-021000/disk.qcow2
	I0520 08:23:48.758705    4230 main.go:141] libmachine: Starting QEMU VM...
	I0520 08:23:48.758746    4230 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/enable-default-cni-021000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/enable-default-cni-021000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/enable-default-cni-021000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:1f:2a:f0:2b:29 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/enable-default-cni-021000/disk.qcow2
	I0520 08:23:48.760251    4230 main.go:141] libmachine: STDOUT: 
	I0520 08:23:48.760267    4230 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 08:23:48.760288    4230 client.go:171] LocalClient.Create took 209.343333ms
	I0520 08:23:50.762509    4230 start.go:128] duration metric: createHost completed in 2.236357916s
	I0520 08:23:50.762559    4230 start.go:83] releasing machines lock for "enable-default-cni-021000", held for 2.236458417s
	W0520 08:23:50.762618    4230 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:23:50.775025    4230 out.go:177] * Deleting "enable-default-cni-021000" in qemu2 ...
	W0520 08:23:50.796498    4230 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:23:50.796525    4230 start.go:702] Will try again in 5 seconds ...
	I0520 08:23:55.798846    4230 start.go:364] acquiring machines lock for enable-default-cni-021000: {Name:mk270c257f49033ffda3b70fea9b877c78314aa8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 08:23:55.799357    4230 start.go:368] acquired machines lock for "enable-default-cni-021000" in 402.875µs
	I0520 08:23:55.799461    4230 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-021000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:enable-default-cni-021000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 08:23:55.799770    4230 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 08:23:55.808707    4230 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 08:23:55.857440    4230 start.go:159] libmachine.API.Create for "enable-default-cni-021000" (driver="qemu2")
	I0520 08:23:55.857481    4230 client.go:168] LocalClient.Create starting
	I0520 08:23:55.857602    4230 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca.pem
	I0520 08:23:55.857650    4230 main.go:141] libmachine: Decoding PEM data...
	I0520 08:23:55.857681    4230 main.go:141] libmachine: Parsing certificate...
	I0520 08:23:55.857763    4230 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/cert.pem
	I0520 08:23:55.857795    4230 main.go:141] libmachine: Decoding PEM data...
	I0520 08:23:55.857813    4230 main.go:141] libmachine: Parsing certificate...
	I0520 08:23:55.858374    4230 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16543-1012/.minikube/cache/iso/arm64/minikube-v1.30.1-1684174510-16506-arm64.iso...
	I0520 08:23:55.989892    4230 main.go:141] libmachine: Creating SSH key...
	I0520 08:23:56.069677    4230 main.go:141] libmachine: Creating Disk image...
	I0520 08:23:56.069684    4230 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 08:23:56.069846    4230 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/enable-default-cni-021000/disk.qcow2.raw /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/enable-default-cni-021000/disk.qcow2
	I0520 08:23:56.078280    4230 main.go:141] libmachine: STDOUT: 
	I0520 08:23:56.078292    4230 main.go:141] libmachine: STDERR: 
	I0520 08:23:56.078355    4230 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/enable-default-cni-021000/disk.qcow2 +20000M
	I0520 08:23:56.085466    4230 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 08:23:56.085478    4230 main.go:141] libmachine: STDERR: 
	I0520 08:23:56.085492    4230 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/enable-default-cni-021000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/enable-default-cni-021000/disk.qcow2
	I0520 08:23:56.085495    4230 main.go:141] libmachine: Starting QEMU VM...
	I0520 08:23:56.085534    4230 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/enable-default-cni-021000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/enable-default-cni-021000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/enable-default-cni-021000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:56:b5:4d:88:13 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/enable-default-cni-021000/disk.qcow2
	I0520 08:23:56.086989    4230 main.go:141] libmachine: STDOUT: 
	I0520 08:23:56.087000    4230 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 08:23:56.087012    4230 client.go:171] LocalClient.Create took 229.527541ms
	I0520 08:23:58.089166    4230 start.go:128] duration metric: createHost completed in 2.289351166s
	I0520 08:23:58.089227    4230 start.go:83] releasing machines lock for "enable-default-cni-021000", held for 2.289848667s
	W0520 08:23:58.089972    4230 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-021000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-021000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:23:58.101399    4230 out.go:177] 
	W0520 08:23:58.105642    4230 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 08:23:58.105674    4230 out.go:239] * 
	* 
	W0520 08:23:58.108237    4230 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 08:23:58.117521    4230 out.go:177] 

** /stderr **
net_test.go:113: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.72s)
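One detail specific to this test is the E-line in the trace above: "Found deprecated --enable-default-cni flag, setting --cni=bridge". The legacy boolean flag is rewritten to the bridge CNI before the cluster config is generated, which is why the config dump shows NetworkPlugin:cni and CNI:bridge. A Go sketch of that style of flag migration (illustrative names, not minikube's actual code):

	package main

	import "log"

	// resolveCNI maps the deprecated --enable-default-cni boolean onto the
	// newer --cni value, warning when the old spelling is used.
	// Illustrative only; not minikube's implementation.
	func resolveCNI(enableDefaultCNI bool, cni string) string {
		if enableDefaultCNI && cni == "" {
			log.Println("Found deprecated --enable-default-cni flag, setting --cni=bridge")
			return "bridge"
		}
		return cni
	}

	func main() {
		log.Println("selected CNI:", resolveCNI(true, ""))
	}

The failure itself is unrelated to the flag handling; it is the same socket_vmnet connection refusal as in the rest of the group.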

TestNetworkPlugins/group/flannel/Start (9.85s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-021000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-021000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.849783667s)

-- stdout --
	* [flannel-021000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16543
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node flannel-021000 in cluster flannel-021000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-021000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
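Before the launch fails, each trace shows the guest disk being prepared in two qemu-img steps: convert the raw seed file to qcow2, then grow the image by +20000M; both succeed in every run ("Image resized." on STDOUT, empty STDERR). A self-contained Go sketch of that sequence, assuming qemu-img is on PATH and using placeholder file names rather than the Jenkins paths:

	package main

	import (
		"log"
		"os/exec"
	)

	// run executes a command and fails loudly, mirroring how the qemu2
	// driver shells out to qemu-img in the traces.
	func run(name string, args ...string) {
		out, err := exec.Command(name, args...).CombinedOutput()
		if err != nil {
			log.Fatalf("%s %v failed: %v\n%s", name, args, err, out)
		}
	}

	func main() {
		// Step 1: convert the raw seed image into qcow2 format.
		run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", "disk.qcow2.raw", "disk.qcow2")
		// Step 2: grow the qcow2 image to the requested capacity.
		run("qemu-img", "resize", "disk.qcow2", "+20000M")
	}

The disk preparation is therefore not implicated; as the stderr trace below shows, the run only falls over at the subsequent socket_vmnet connection.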
** stderr ** 
	I0520 08:24:00.296914    4339 out.go:296] Setting OutFile to fd 1 ...
	I0520 08:24:00.297065    4339 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:24:00.297067    4339 out.go:309] Setting ErrFile to fd 2...
	I0520 08:24:00.297070    4339 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:24:00.297138    4339 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16543-1012/.minikube/bin
	I0520 08:24:00.298197    4339 out.go:303] Setting JSON to false
	I0520 08:24:00.313444    4339 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1411,"bootTime":1684594829,"procs":451,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 08:24:00.313502    4339 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0520 08:24:00.318836    4339 out.go:177] * [flannel-021000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0520 08:24:00.325772    4339 out.go:177]   - MINIKUBE_LOCATION=16543
	I0520 08:24:00.325802    4339 notify.go:220] Checking for updates...
	I0520 08:24:00.333613    4339 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	I0520 08:24:00.336848    4339 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 08:24:00.339829    4339 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 08:24:00.346602    4339 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	I0520 08:24:00.349841    4339 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 08:24:00.353173    4339 config.go:182] Loaded profile config "multinode-046000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0520 08:24:00.353199    4339 driver.go:375] Setting default libvirt URI to qemu:///system
	I0520 08:24:00.356807    4339 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 08:24:00.363798    4339 start.go:295] selected driver: qemu2
	I0520 08:24:00.363803    4339 start.go:870] validating driver "qemu2" against <nil>
	I0520 08:24:00.363811    4339 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 08:24:00.365815    4339 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0520 08:24:00.368861    4339 out.go:177] * Automatically selected the socket_vmnet network
	I0520 08:24:00.372824    4339 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 08:24:00.372842    4339 cni.go:84] Creating CNI manager for "flannel"
	I0520 08:24:00.372845    4339 start_flags.go:314] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0520 08:24:00.372854    4339 start_flags.go:319] config:
	{Name:flannel-021000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:flannel-021000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0520 08:24:00.372939    4339 iso.go:125] acquiring lock: {Name:mkf05a14b47e9b67445252fbb4917f3bcb37f054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 08:24:00.380774    4339 out.go:177] * Starting control plane node flannel-021000 in cluster flannel-021000
	I0520 08:24:00.384772    4339 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0520 08:24:00.384798    4339 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0520 08:24:00.384813    4339 cache.go:57] Caching tarball of preloaded images
	I0520 08:24:00.384870    4339 preload.go:174] Found /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 08:24:00.384876    4339 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0520 08:24:00.384951    4339 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/flannel-021000/config.json ...
	I0520 08:24:00.384968    4339 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/flannel-021000/config.json: {Name:mk6590143ef60733add53c5cc4c1f887ce1f9768 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 08:24:00.385162    4339 cache.go:195] Successfully downloaded all kic artifacts
	I0520 08:24:00.385173    4339 start.go:364] acquiring machines lock for flannel-021000: {Name:mk270c257f49033ffda3b70fea9b877c78314aa8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 08:24:00.385207    4339 start.go:368] acquired machines lock for "flannel-021000" in 24.917µs
	I0520 08:24:00.385222    4339 start.go:93] Provisioning new machine with config: &{Name:flannel-021000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:flannel-021000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 08:24:00.385252    4339 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 08:24:00.393794    4339 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 08:24:00.410617    4339 start.go:159] libmachine.API.Create for "flannel-021000" (driver="qemu2")
	I0520 08:24:00.410644    4339 client.go:168] LocalClient.Create starting
	I0520 08:24:00.410719    4339 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca.pem
	I0520 08:24:00.410746    4339 main.go:141] libmachine: Decoding PEM data...
	I0520 08:24:00.410761    4339 main.go:141] libmachine: Parsing certificate...
	I0520 08:24:00.410822    4339 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/cert.pem
	I0520 08:24:00.410837    4339 main.go:141] libmachine: Decoding PEM data...
	I0520 08:24:00.410845    4339 main.go:141] libmachine: Parsing certificate...
	I0520 08:24:00.411205    4339 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16543-1012/.minikube/cache/iso/arm64/minikube-v1.30.1-1684174510-16506-arm64.iso...
	I0520 08:24:00.525190    4339 main.go:141] libmachine: Creating SSH key...
	I0520 08:24:00.718242    4339 main.go:141] libmachine: Creating Disk image...
	I0520 08:24:00.718248    4339 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 08:24:00.718408    4339 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/flannel-021000/disk.qcow2.raw /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/flannel-021000/disk.qcow2
	I0520 08:24:00.727480    4339 main.go:141] libmachine: STDOUT: 
	I0520 08:24:00.727495    4339 main.go:141] libmachine: STDERR: 
	I0520 08:24:00.727557    4339 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/flannel-021000/disk.qcow2 +20000M
	I0520 08:24:00.734739    4339 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 08:24:00.734751    4339 main.go:141] libmachine: STDERR: 
	I0520 08:24:00.734766    4339 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/flannel-021000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/flannel-021000/disk.qcow2
	I0520 08:24:00.734773    4339 main.go:141] libmachine: Starting QEMU VM...
	I0520 08:24:00.734807    4339 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/flannel-021000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/flannel-021000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/flannel-021000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:7a:e9:67:26:ca -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/flannel-021000/disk.qcow2
	I0520 08:24:00.736351    4339 main.go:141] libmachine: STDOUT: 
	I0520 08:24:00.736363    4339 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 08:24:00.736382    4339 client.go:171] LocalClient.Create took 325.734208ms
	I0520 08:24:02.738545    4339 start.go:128] duration metric: createHost completed in 2.353275583s
	I0520 08:24:02.738617    4339 start.go:83] releasing machines lock for "flannel-021000", held for 2.353404541s
	W0520 08:24:02.738705    4339 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:24:02.750097    4339 out.go:177] * Deleting "flannel-021000" in qemu2 ...
	W0520 08:24:02.769895    4339 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:24:02.769915    4339 start.go:702] Will try again in 5 seconds ...
	I0520 08:24:07.772152    4339 start.go:364] acquiring machines lock for flannel-021000: {Name:mk270c257f49033ffda3b70fea9b877c78314aa8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 08:24:07.772766    4339 start.go:368] acquired machines lock for "flannel-021000" in 493.583µs
	I0520 08:24:07.772930    4339 start.go:93] Provisioning new machine with config: &{Name:flannel-021000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:flannel-021000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 08:24:07.773177    4339 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 08:24:07.783114    4339 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 08:24:07.831274    4339 start.go:159] libmachine.API.Create for "flannel-021000" (driver="qemu2")
	I0520 08:24:07.831320    4339 client.go:168] LocalClient.Create starting
	I0520 08:24:07.831459    4339 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca.pem
	I0520 08:24:07.831500    4339 main.go:141] libmachine: Decoding PEM data...
	I0520 08:24:07.831519    4339 main.go:141] libmachine: Parsing certificate...
	I0520 08:24:07.831587    4339 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/cert.pem
	I0520 08:24:07.831615    4339 main.go:141] libmachine: Decoding PEM data...
	I0520 08:24:07.831632    4339 main.go:141] libmachine: Parsing certificate...
	I0520 08:24:07.832337    4339 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16543-1012/.minikube/cache/iso/arm64/minikube-v1.30.1-1684174510-16506-arm64.iso...
	I0520 08:24:07.956549    4339 main.go:141] libmachine: Creating SSH key...
	I0520 08:24:08.062053    4339 main.go:141] libmachine: Creating Disk image...
	I0520 08:24:08.062061    4339 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 08:24:08.062220    4339 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/flannel-021000/disk.qcow2.raw /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/flannel-021000/disk.qcow2
	I0520 08:24:08.070681    4339 main.go:141] libmachine: STDOUT: 
	I0520 08:24:08.070694    4339 main.go:141] libmachine: STDERR: 
	I0520 08:24:08.070744    4339 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/flannel-021000/disk.qcow2 +20000M
	I0520 08:24:08.077935    4339 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 08:24:08.077945    4339 main.go:141] libmachine: STDERR: 
	I0520 08:24:08.077960    4339 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/flannel-021000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/flannel-021000/disk.qcow2
	I0520 08:24:08.077965    4339 main.go:141] libmachine: Starting QEMU VM...
	I0520 08:24:08.078011    4339 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/flannel-021000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/flannel-021000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/flannel-021000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:28:a7:03:98:05 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/flannel-021000/disk.qcow2
	I0520 08:24:08.079512    4339 main.go:141] libmachine: STDOUT: 
	I0520 08:24:08.079525    4339 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 08:24:08.079537    4339 client.go:171] LocalClient.Create took 248.208542ms
	I0520 08:24:10.081712    4339 start.go:128] duration metric: createHost completed in 2.30848175s
	I0520 08:24:10.081759    4339 start.go:83] releasing machines lock for "flannel-021000", held for 2.308973875s
	W0520 08:24:10.082314    4339 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-021000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-021000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:24:10.090016    4339 out.go:177] 
	W0520 08:24:10.095113    4339 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 08:24:10.095136    4339 out.go:239] * 
	* 
	W0520 08:24:10.097646    4339 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 08:24:10.105929    4339 out.go:177] 

** /stderr **
net_test.go:113: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.85s)
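Both create attempts above fail the same way: the driver launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and the dial of the /var/run/socket_vmnet unix socket is refused, i.e. no socket_vmnet daemon was listening on the agent when the test ran. A minimal standalone Go sketch (not part of the test suite; the socket path is taken from the failing command line above) that reproduces the same check:

    // probe.go - dial the socket_vmnet control socket the same way the
    // qemu2 driver does, to confirm whether a daemon is listening.
    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        const sock = "/var/run/socket_vmnet" // path from the failing command line
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            // Same failure mode as the log:
            // Failed to connect to "/var/run/socket_vmnet": Connection refused
            fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Printf("socket_vmnet is listening at %s\n", sock)
    }

On this agent the probe would exit 1, matching the repeated StartHost failures; restarting the socket_vmnet service on the host (an assumption about the agent setup, not something the log itself confirms) is the usual remedy.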

TestStoppedBinaryUpgrade/Upgrade (2.61s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:195: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.389184646.exe start -p stopped-upgrade-888000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:195: (dbg) Non-zero exit: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.389184646.exe start -p stopped-upgrade-888000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.389184646.exe: permission denied (1.629667ms)
version_upgrade_test.go:195: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.389184646.exe start -p stopped-upgrade-888000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:195: (dbg) Non-zero exit: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.389184646.exe start -p stopped-upgrade-888000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.389184646.exe: permission denied (5.345958ms)
version_upgrade_test.go:195: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.389184646.exe start -p stopped-upgrade-888000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:195: (dbg) Non-zero exit: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.389184646.exe start -p stopped-upgrade-888000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.389184646.exe: permission denied (5.977209ms)
version_upgrade_test.go:201: legacy v1.6.2 start failed: fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.389184646.exe: permission denied
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (2.61s)
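Unlike the other failures in this run, this one never reaches qemu: fork/exec of the downloaded v1.6.2 binary returns "permission denied", which means the temp file under /var/folders/... was written without its execute bit. A self-contained Go sketch of that failure mode and its fix (the temp-file name here is illustrative, standing in for the elided path in the log):

    // Temp files are created 0600 (rw-------), so exec'ing one fails with
    // "permission denied" exactly as version_upgrade_test.go reports above.
    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // Stand-in for the downloaded legacy binary (illustrative name).
        f, err := os.CreateTemp("", "minikube-v1.6.2.*.exe")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer os.Remove(f.Name())
        f.Close()

        before, _ := os.Stat(f.Name())
        // Restoring the execute bit is what lets fork/exec succeed.
        if err := os.Chmod(f.Name(), 0o755); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        after, _ := os.Stat(f.Name())
        fmt.Printf("mode before: %v, after: %v\n", before.Mode(), after.Mode())
    }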

TestStoppedBinaryUpgrade/MinikubeLogs (0.12s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-888000
version_upgrade_test.go:218: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p stopped-upgrade-888000: exit status 85 (113.88175ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p false-021000 sudo systemctl                       | false-021000              | jenkins | v1.30.1 | 20 May 23 08:23 PDT |                     |
	|         | status kubelet --all --full                          |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p false-021000 sudo systemctl                       | false-021000              | jenkins | v1.30.1 | 20 May 23 08:23 PDT |                     |
	|         | cat kubelet --no-pager                               |                           |         |         |                     |                     |
	| ssh     | -p false-021000 sudo                                 | false-021000              | jenkins | v1.30.1 | 20 May 23 08:23 PDT |                     |
	|         | journalctl -xeu kubelet --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p false-021000 sudo cat                             | false-021000              | jenkins | v1.30.1 | 20 May 23 08:23 PDT |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p false-021000 sudo cat                             | false-021000              | jenkins | v1.30.1 | 20 May 23 08:23 PDT |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p false-021000 sudo systemctl                       | false-021000              | jenkins | v1.30.1 | 20 May 23 08:23 PDT |                     |
	|         | status docker --all --full                           |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p false-021000 sudo systemctl                       | false-021000              | jenkins | v1.30.1 | 20 May 23 08:23 PDT |                     |
	|         | cat docker --no-pager                                |                           |         |         |                     |                     |
	| ssh     | -p false-021000 sudo cat                             | false-021000              | jenkins | v1.30.1 | 20 May 23 08:23 PDT |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p false-021000 sudo docker                          | false-021000              | jenkins | v1.30.1 | 20 May 23 08:23 PDT |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p false-021000 sudo systemctl                       | false-021000              | jenkins | v1.30.1 | 20 May 23 08:23 PDT |                     |
	|         | status cri-docker --all --full                       |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p false-021000 sudo systemctl                       | false-021000              | jenkins | v1.30.1 | 20 May 23 08:23 PDT |                     |
	|         | cat cri-docker --no-pager                            |                           |         |         |                     |                     |
	| ssh     | -p false-021000 sudo cat                             | false-021000              | jenkins | v1.30.1 | 20 May 23 08:23 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p false-021000 sudo cat                             | false-021000              | jenkins | v1.30.1 | 20 May 23 08:23 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p false-021000 sudo                                 | false-021000              | jenkins | v1.30.1 | 20 May 23 08:23 PDT |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p false-021000 sudo systemctl                       | false-021000              | jenkins | v1.30.1 | 20 May 23 08:23 PDT |                     |
	|         | status containerd --all --full                       |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p false-021000 sudo systemctl                       | false-021000              | jenkins | v1.30.1 | 20 May 23 08:23 PDT |                     |
	|         | cat containerd --no-pager                            |                           |         |         |                     |                     |
	| ssh     | -p false-021000 sudo cat                             | false-021000              | jenkins | v1.30.1 | 20 May 23 08:23 PDT |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p false-021000 sudo cat                             | false-021000              | jenkins | v1.30.1 | 20 May 23 08:23 PDT |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p false-021000 sudo                                 | false-021000              | jenkins | v1.30.1 | 20 May 23 08:23 PDT |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p false-021000 sudo systemctl                       | false-021000              | jenkins | v1.30.1 | 20 May 23 08:23 PDT |                     |
	|         | status crio --all --full                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p false-021000 sudo systemctl                       | false-021000              | jenkins | v1.30.1 | 20 May 23 08:23 PDT |                     |
	|         | cat crio --no-pager                                  |                           |         |         |                     |                     |
	| ssh     | -p false-021000 sudo find                            | false-021000              | jenkins | v1.30.1 | 20 May 23 08:23 PDT |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p false-021000 sudo crio                            | false-021000              | jenkins | v1.30.1 | 20 May 23 08:23 PDT |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p false-021000                                      | false-021000              | jenkins | v1.30.1 | 20 May 23 08:23 PDT | 20 May 23 08:23 PDT |
	| start   | -p enable-default-cni-021000                         | enable-default-cni-021000 | jenkins | v1.30.1 | 20 May 23 08:23 PDT |                     |
	|         | --memory=3072                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --enable-default-cni=true                            |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-021000                         | enable-default-cni-021000 | jenkins | v1.30.1 | 20 May 23 08:23 PDT |                     |
	|         | sudo cat /etc/nsswitch.conf                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-021000                         | enable-default-cni-021000 | jenkins | v1.30.1 | 20 May 23 08:23 PDT |                     |
	|         | sudo cat /etc/hosts                                  |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-021000                         | enable-default-cni-021000 | jenkins | v1.30.1 | 20 May 23 08:23 PDT |                     |
	|         | sudo cat /etc/resolv.conf                            |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-021000                         | enable-default-cni-021000 | jenkins | v1.30.1 | 20 May 23 08:23 PDT |                     |
	|         | sudo crictl pods                                     |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-021000                         | enable-default-cni-021000 | jenkins | v1.30.1 | 20 May 23 08:23 PDT |                     |
	|         | sudo crictl ps --all                                 |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-021000                         | enable-default-cni-021000 | jenkins | v1.30.1 | 20 May 23 08:23 PDT |                     |
	|         | sudo find /etc/cni -type f                           |                           |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                        |                           |         |         |                     |                     |
	|         | \;                                                   |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-021000                         | enable-default-cni-021000 | jenkins | v1.30.1 | 20 May 23 08:23 PDT |                     |
	|         | sudo ip a s                                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-021000                         | enable-default-cni-021000 | jenkins | v1.30.1 | 20 May 23 08:23 PDT |                     |
	|         | sudo ip r s                                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-021000                         | enable-default-cni-021000 | jenkins | v1.30.1 | 20 May 23 08:23 PDT |                     |
	|         | sudo iptables-save                                   |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-021000                         | enable-default-cni-021000 | jenkins | v1.30.1 | 20 May 23 08:23 PDT |                     |
	|         | sudo iptables -t nat -L -n -v                        |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-021000                         | enable-default-cni-021000 | jenkins | v1.30.1 | 20 May 23 08:23 PDT |                     |
	|         | sudo systemctl status kubelet                        |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-021000                         | enable-default-cni-021000 | jenkins | v1.30.1 | 20 May 23 08:23 PDT |                     |
	|         | sudo systemctl cat kubelet                           |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-021000                         | enable-default-cni-021000 | jenkins | v1.30.1 | 20 May 23 08:23 PDT |                     |
	|         | sudo journalctl -xeu kubelet                         |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-021000                         | enable-default-cni-021000 | jenkins | v1.30.1 | 20 May 23 08:23 PDT |                     |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-021000                         | enable-default-cni-021000 | jenkins | v1.30.1 | 20 May 23 08:23 PDT |                     |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-021000                         | enable-default-cni-021000 | jenkins | v1.30.1 | 20 May 23 08:23 PDT |                     |
	|         | sudo systemctl status docker                         |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-021000                         | enable-default-cni-021000 | jenkins | v1.30.1 | 20 May 23 08:23 PDT |                     |
	|         | sudo systemctl cat docker                            |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-021000                         | enable-default-cni-021000 | jenkins | v1.30.1 | 20 May 23 08:23 PDT |                     |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-021000                         | enable-default-cni-021000 | jenkins | v1.30.1 | 20 May 23 08:23 PDT |                     |
	|         | sudo docker system info                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-021000                         | enable-default-cni-021000 | jenkins | v1.30.1 | 20 May 23 08:23 PDT |                     |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | cri-docker --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-021000                         | enable-default-cni-021000 | jenkins | v1.30.1 | 20 May 23 08:23 PDT |                     |
	|         | sudo systemctl cat cri-docker                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-021000 sudo cat                | enable-default-cni-021000 | jenkins | v1.30.1 | 20 May 23 08:23 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-021000 sudo cat                | enable-default-cni-021000 | jenkins | v1.30.1 | 20 May 23 08:23 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-021000                         | enable-default-cni-021000 | jenkins | v1.30.1 | 20 May 23 08:23 PDT |                     |
	|         | sudo cri-dockerd --version                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-021000                         | enable-default-cni-021000 | jenkins | v1.30.1 | 20 May 23 08:23 PDT |                     |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | containerd --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-021000                         | enable-default-cni-021000 | jenkins | v1.30.1 | 20 May 23 08:23 PDT |                     |
	|         | sudo systemctl cat containerd                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-021000 sudo cat                | enable-default-cni-021000 | jenkins | v1.30.1 | 20 May 23 08:23 PDT |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-021000                         | enable-default-cni-021000 | jenkins | v1.30.1 | 20 May 23 08:23 PDT |                     |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-021000                         | enable-default-cni-021000 | jenkins | v1.30.1 | 20 May 23 08:23 PDT |                     |
	|         | sudo containerd config dump                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-021000                         | enable-default-cni-021000 | jenkins | v1.30.1 | 20 May 23 08:24 PDT |                     |
	|         | sudo systemctl status crio                           |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-021000                         | enable-default-cni-021000 | jenkins | v1.30.1 | 20 May 23 08:24 PDT |                     |
	|         | sudo systemctl cat crio                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-021000                         | enable-default-cni-021000 | jenkins | v1.30.1 | 20 May 23 08:24 PDT |                     |
	|         | sudo find /etc/crio -type f                          |                           |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                        |                           |         |         |                     |                     |
	|         | \;                                                   |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-021000                         | enable-default-cni-021000 | jenkins | v1.30.1 | 20 May 23 08:24 PDT |                     |
	|         | sudo crio config                                     |                           |         |         |                     |                     |
	| delete  | -p enable-default-cni-021000                         | enable-default-cni-021000 | jenkins | v1.30.1 | 20 May 23 08:24 PDT | 20 May 23 08:24 PDT |
	| start   | -p flannel-021000                                    | flannel-021000            | jenkins | v1.30.1 | 20 May 23 08:24 PDT |                     |
	|         | --memory=3072                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --cni=flannel --driver=qemu2                         |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/05/20 08:24:00
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 08:24:00.296914    4339 out.go:296] Setting OutFile to fd 1 ...
	I0520 08:24:00.297065    4339 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:24:00.297067    4339 out.go:309] Setting ErrFile to fd 2...
	I0520 08:24:00.297070    4339 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:24:00.297138    4339 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16543-1012/.minikube/bin
	I0520 08:24:00.298197    4339 out.go:303] Setting JSON to false
	I0520 08:24:00.313444    4339 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1411,"bootTime":1684594829,"procs":451,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 08:24:00.313502    4339 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0520 08:24:00.318836    4339 out.go:177] * [flannel-021000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0520 08:24:00.325772    4339 out.go:177]   - MINIKUBE_LOCATION=16543
	I0520 08:24:00.325802    4339 notify.go:220] Checking for updates...
	I0520 08:24:00.333613    4339 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	I0520 08:24:00.336848    4339 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 08:24:00.339829    4339 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 08:24:00.346602    4339 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	I0520 08:24:00.349841    4339 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 08:24:00.353173    4339 config.go:182] Loaded profile config "multinode-046000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0520 08:24:00.353199    4339 driver.go:375] Setting default libvirt URI to qemu:///system
	I0520 08:24:00.356807    4339 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 08:24:00.363798    4339 start.go:295] selected driver: qemu2
	I0520 08:24:00.363803    4339 start.go:870] validating driver "qemu2" against <nil>
	I0520 08:24:00.363811    4339 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 08:24:00.365815    4339 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0520 08:24:00.368861    4339 out.go:177] * Automatically selected the socket_vmnet network
	I0520 08:24:00.372824    4339 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 08:24:00.372842    4339 cni.go:84] Creating CNI manager for "flannel"
	I0520 08:24:00.372845    4339 start_flags.go:314] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0520 08:24:00.372854    4339 start_flags.go:319] config:
	{Name:flannel-021000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:flannel-021000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0520 08:24:00.372939    4339 iso.go:125] acquiring lock: {Name:mkf05a14b47e9b67445252fbb4917f3bcb37f054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 08:24:00.380774    4339 out.go:177] * Starting control plane node flannel-021000 in cluster flannel-021000
	I0520 08:24:00.384772    4339 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0520 08:24:00.384798    4339 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0520 08:24:00.384813    4339 cache.go:57] Caching tarball of preloaded images
	I0520 08:24:00.384870    4339 preload.go:174] Found /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 08:24:00.384876    4339 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0520 08:24:00.384951    4339 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/flannel-021000/config.json ...
	I0520 08:24:00.384968    4339 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/flannel-021000/config.json: {Name:mk6590143ef60733add53c5cc4c1f887ce1f9768 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 08:24:00.385162    4339 cache.go:195] Successfully downloaded all kic artifacts
	I0520 08:24:00.385173    4339 start.go:364] acquiring machines lock for flannel-021000: {Name:mk270c257f49033ffda3b70fea9b877c78314aa8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 08:24:00.385207    4339 start.go:368] acquired machines lock for "flannel-021000" in 24.917µs
	I0520 08:24:00.385222    4339 start.go:93] Provisioning new machine with config: &{Name:flannel-021000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:flannel-021000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 08:24:00.385252    4339 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 08:24:00.393794    4339 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 08:24:00.410617    4339 start.go:159] libmachine.API.Create for "flannel-021000" (driver="qemu2")
	I0520 08:24:00.410644    4339 client.go:168] LocalClient.Create starting
	I0520 08:24:00.410719    4339 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca.pem
	I0520 08:24:00.410746    4339 main.go:141] libmachine: Decoding PEM data...
	I0520 08:24:00.410761    4339 main.go:141] libmachine: Parsing certificate...
	I0520 08:24:00.410822    4339 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/cert.pem
	I0520 08:24:00.410837    4339 main.go:141] libmachine: Decoding PEM data...
	I0520 08:24:00.410845    4339 main.go:141] libmachine: Parsing certificate...
	I0520 08:24:00.411205    4339 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16543-1012/.minikube/cache/iso/arm64/minikube-v1.30.1-1684174510-16506-arm64.iso...
	I0520 08:24:00.525190    4339 main.go:141] libmachine: Creating SSH key...
	I0520 08:24:00.718242    4339 main.go:141] libmachine: Creating Disk image...
	I0520 08:24:00.718248    4339 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 08:24:00.718408    4339 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/flannel-021000/disk.qcow2.raw /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/flannel-021000/disk.qcow2
	I0520 08:24:00.727480    4339 main.go:141] libmachine: STDOUT: 
	I0520 08:24:00.727495    4339 main.go:141] libmachine: STDERR: 
	I0520 08:24:00.727557    4339 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/flannel-021000/disk.qcow2 +20000M
	I0520 08:24:00.734739    4339 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 08:24:00.734751    4339 main.go:141] libmachine: STDERR: 
	I0520 08:24:00.734766    4339 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/flannel-021000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/flannel-021000/disk.qcow2
	I0520 08:24:00.734773    4339 main.go:141] libmachine: Starting QEMU VM...
	I0520 08:24:00.734807    4339 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/flannel-021000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/flannel-021000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/flannel-021000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:7a:e9:67:26:ca -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/flannel-021000/disk.qcow2
	I0520 08:24:00.736351    4339 main.go:141] libmachine: STDOUT: 
	I0520 08:24:00.736363    4339 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 08:24:00.736382    4339 client.go:171] LocalClient.Create took 325.734208ms
	I0520 08:24:02.738545    4339 start.go:128] duration metric: createHost completed in 2.353275583s
	I0520 08:24:02.738617    4339 start.go:83] releasing machines lock for "flannel-021000", held for 2.353404541s
	W0520 08:24:02.738705    4339 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:24:02.750097    4339 out.go:177] * Deleting "flannel-021000" in qemu2 ...
	
	* 
	* Profile "stopped-upgrade-888000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p stopped-upgrade-888000"

-- /stdout --
version_upgrade_test.go:220: `minikube logs` after upgrade to HEAD from v1.6.2 failed: exit status 85
--- FAIL: TestStoppedBinaryUpgrade/MinikubeLogs (0.12s)
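This subtest is a knock-on failure: the Upgrade step above never created the stopped-upgrade-888000 profile, so "minikube logs -p stopped-upgrade-888000" can only return the usage error shown in the stdout dump (exit status 85, profile not found). A hedged Go sketch of a pre-check that would keep the two failures separate (it assumes only that "minikube profile list" prints the names of existing profiles):

    // Skip log collection when the profile was never created, so the only
    // reported failure is the one in the Upgrade step.
    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    func main() {
        const profile = "stopped-upgrade-888000"
        out, _ := exec.Command("minikube", "profile", "list").CombinedOutput()
        if !bytes.Contains(out, []byte(profile)) {
            fmt.Printf("profile %q not found; skipping minikube logs\n", profile)
            return
        }
        logs, err := exec.Command("minikube", "logs", "-p", profile).CombinedOutput()
        if err != nil {
            fmt.Printf("minikube logs failed: %v\n%s\n", err, logs)
            return
        }
        fmt.Printf("%s\n", logs)
    }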

TestNetworkPlugins/group/bridge/Start (9.7s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-021000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-021000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.696059417s)

-- stdout --
	* [bridge-021000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16543
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node bridge-021000 in cluster bridge-021000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-021000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 08:24:03.428192    4367 out.go:296] Setting OutFile to fd 1 ...
	I0520 08:24:03.428327    4367 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:24:03.428330    4367 out.go:309] Setting ErrFile to fd 2...
	I0520 08:24:03.428332    4367 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:24:03.428394    4367 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16543-1012/.minikube/bin
	I0520 08:24:03.429450    4367 out.go:303] Setting JSON to false
	I0520 08:24:03.444391    4367 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1414,"bootTime":1684594829,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 08:24:03.444464    4367 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0520 08:24:03.448964    4367 out.go:177] * [bridge-021000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0520 08:24:03.455885    4367 out.go:177]   - MINIKUBE_LOCATION=16543
	I0520 08:24:03.455948    4367 notify.go:220] Checking for updates...
	I0520 08:24:03.463852    4367 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	I0520 08:24:03.466933    4367 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 08:24:03.468343    4367 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 08:24:03.471823    4367 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	I0520 08:24:03.474943    4367 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 08:24:03.478168    4367 config.go:182] Loaded profile config "flannel-021000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0520 08:24:03.478227    4367 config.go:182] Loaded profile config "multinode-046000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0520 08:24:03.478246    4367 driver.go:375] Setting default libvirt URI to qemu:///system
	I0520 08:24:03.482807    4367 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 08:24:03.489862    4367 start.go:295] selected driver: qemu2
	I0520 08:24:03.489866    4367 start.go:870] validating driver "qemu2" against <nil>
	I0520 08:24:03.489872    4367 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 08:24:03.491697    4367 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0520 08:24:03.495820    4367 out.go:177] * Automatically selected the socket_vmnet network
	I0520 08:24:03.498936    4367 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 08:24:03.498960    4367 cni.go:84] Creating CNI manager for "bridge"
	I0520 08:24:03.498965    4367 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 08:24:03.498972    4367 start_flags.go:319] config:
	{Name:bridge-021000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:bridge-021000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0520 08:24:03.499054    4367 iso.go:125] acquiring lock: {Name:mkf05a14b47e9b67445252fbb4917f3bcb37f054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 08:24:03.506794    4367 out.go:177] * Starting control plane node bridge-021000 in cluster bridge-021000
	I0520 08:24:03.510861    4367 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0520 08:24:03.510884    4367 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0520 08:24:03.510898    4367 cache.go:57] Caching tarball of preloaded images
	I0520 08:24:03.510961    4367 preload.go:174] Found /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 08:24:03.510966    4367 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0520 08:24:03.511023    4367 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/bridge-021000/config.json ...
	I0520 08:24:03.511035    4367 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/bridge-021000/config.json: {Name:mk1e8ef27e3887c2b1425c051e63b11e808d7b74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 08:24:03.511230    4367 cache.go:195] Successfully downloaded all kic artifacts
	I0520 08:24:03.511242    4367 start.go:364] acquiring machines lock for bridge-021000: {Name:mk270c257f49033ffda3b70fea9b877c78314aa8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 08:24:03.511271    4367 start.go:368] acquired machines lock for "bridge-021000" in 24.75µs
	I0520 08:24:03.511289    4367 start.go:93] Provisioning new machine with config: &{Name:bridge-021000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:bridge-021000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 08:24:03.511315    4367 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 08:24:03.519865    4367 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 08:24:03.537364    4367 start.go:159] libmachine.API.Create for "bridge-021000" (driver="qemu2")
	I0520 08:24:03.537384    4367 client.go:168] LocalClient.Create starting
	I0520 08:24:03.537444    4367 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca.pem
	I0520 08:24:03.537468    4367 main.go:141] libmachine: Decoding PEM data...
	I0520 08:24:03.537486    4367 main.go:141] libmachine: Parsing certificate...
	I0520 08:24:03.537517    4367 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/cert.pem
	I0520 08:24:03.537532    4367 main.go:141] libmachine: Decoding PEM data...
	I0520 08:24:03.537539    4367 main.go:141] libmachine: Parsing certificate...
	I0520 08:24:03.537864    4367 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16543-1012/.minikube/cache/iso/arm64/minikube-v1.30.1-1684174510-16506-arm64.iso...
	I0520 08:24:03.652130    4367 main.go:141] libmachine: Creating SSH key...
	I0520 08:24:03.747387    4367 main.go:141] libmachine: Creating Disk image...
	I0520 08:24:03.747394    4367 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 08:24:03.747583    4367 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/bridge-021000/disk.qcow2.raw /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/bridge-021000/disk.qcow2
	I0520 08:24:03.756089    4367 main.go:141] libmachine: STDOUT: 
	I0520 08:24:03.756102    4367 main.go:141] libmachine: STDERR: 
	I0520 08:24:03.756153    4367 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/bridge-021000/disk.qcow2 +20000M
	I0520 08:24:03.763298    4367 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 08:24:03.763310    4367 main.go:141] libmachine: STDERR: 
	I0520 08:24:03.763330    4367 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/bridge-021000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/bridge-021000/disk.qcow2
	I0520 08:24:03.763336    4367 main.go:141] libmachine: Starting QEMU VM...
	I0520 08:24:03.763372    4367 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/bridge-021000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/bridge-021000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/bridge-021000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:bb:2c:b6:a3:fe -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/bridge-021000/disk.qcow2
	I0520 08:24:03.764922    4367 main.go:141] libmachine: STDOUT: 
	I0520 08:24:03.764936    4367 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 08:24:03.764953    4367 client.go:171] LocalClient.Create took 227.561333ms
	I0520 08:24:05.767156    4367 start.go:128] duration metric: createHost completed in 2.255810958s
	I0520 08:24:05.767244    4367 start.go:83] releasing machines lock for "bridge-021000", held for 2.255967041s
	W0520 08:24:05.767312    4367 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:24:05.773852    4367 out.go:177] * Deleting "bridge-021000" in qemu2 ...
	W0520 08:24:05.793655    4367 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:24:05.793682    4367 start.go:702] Will try again in 5 seconds ...
	I0520 08:24:10.795749    4367 start.go:364] acquiring machines lock for bridge-021000: {Name:mk270c257f49033ffda3b70fea9b877c78314aa8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 08:24:10.795842    4367 start.go:368] acquired machines lock for "bridge-021000" in 58.958µs
	I0520 08:24:10.795872    4367 start.go:93] Provisioning new machine with config: &{Name:bridge-021000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:bridge-021000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 08:24:10.795921    4367 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 08:24:10.805184    4367 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 08:24:10.819488    4367 start.go:159] libmachine.API.Create for "bridge-021000" (driver="qemu2")
	I0520 08:24:10.819506    4367 client.go:168] LocalClient.Create starting
	I0520 08:24:10.819560    4367 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca.pem
	I0520 08:24:10.819585    4367 main.go:141] libmachine: Decoding PEM data...
	I0520 08:24:10.819595    4367 main.go:141] libmachine: Parsing certificate...
	I0520 08:24:10.819636    4367 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/cert.pem
	I0520 08:24:10.819651    4367 main.go:141] libmachine: Decoding PEM data...
	I0520 08:24:10.819659    4367 main.go:141] libmachine: Parsing certificate...
	I0520 08:24:10.819931    4367 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16543-1012/.minikube/cache/iso/arm64/minikube-v1.30.1-1684174510-16506-arm64.iso...
	I0520 08:24:10.975590    4367 main.go:141] libmachine: Creating SSH key...
	I0520 08:24:11.037621    4367 main.go:141] libmachine: Creating Disk image...
	I0520 08:24:11.037631    4367 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 08:24:11.037838    4367 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/bridge-021000/disk.qcow2.raw /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/bridge-021000/disk.qcow2
	I0520 08:24:11.047133    4367 main.go:141] libmachine: STDOUT: 
	I0520 08:24:11.047151    4367 main.go:141] libmachine: STDERR: 
	I0520 08:24:11.047221    4367 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/bridge-021000/disk.qcow2 +20000M
	I0520 08:24:11.055437    4367 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 08:24:11.055472    4367 main.go:141] libmachine: STDERR: 
	I0520 08:24:11.055486    4367 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/bridge-021000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/bridge-021000/disk.qcow2
	I0520 08:24:11.055493    4367 main.go:141] libmachine: Starting QEMU VM...
	I0520 08:24:11.055537    4367 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/bridge-021000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/bridge-021000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/bridge-021000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:52:07:4f:87:9c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/bridge-021000/disk.qcow2
	I0520 08:24:11.057108    4367 main.go:141] libmachine: STDOUT: 
	I0520 08:24:11.057123    4367 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 08:24:11.057137    4367 client.go:171] LocalClient.Create took 237.627583ms
	I0520 08:24:13.059351    4367 start.go:128] duration metric: createHost completed in 2.263365792s
	I0520 08:24:13.059419    4367 start.go:83] releasing machines lock for "bridge-021000", held for 2.263569334s
	W0520 08:24:13.060072    4367 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-021000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-021000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:24:13.073611    4367 out.go:177] 
	W0520 08:24:13.077876    4367 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 08:24:13.077908    4367 out.go:239] * 
	* 
	W0520 08:24:13.079788    4367 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 08:24:13.088576    4367 out.go:177] 

** /stderr **
net_test.go:113: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.70s)
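
Every qemu2 start in this report fails at the same step: socket_vmnet_client gets "Connection refused" when dialing the unix socket /var/run/socket_vmnet, which points at the socket_vmnet daemon not running on the CI host rather than at the tests themselves. Below is a minimal standalone diagnostic sketch (a hypothetical helper, not part of minikube; the file name and messages are made up) that reproduces the check by dialing the socket directly:

    // socket_probe.go -- hypothetical diagnostic, not part of minikube.
    // Dials the unix socket that socket_vmnet_client needs. A "connection
    // refused" error here matches the failure mode captured in the logs
    // above; a successful dial means a socket_vmnet daemon is listening.
    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        const sock = "/var/run/socket_vmnet"
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            fmt.Fprintf(os.Stderr, "probe failed: %v\n", err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Printf("%s is accepting connections\n", sock)
    }

If this probe fails the same way outside the test run, the fix belongs on the host (a socket_vmnet daemon serving /var/run/socket_vmnet), not in net_test.go.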

TestNetworkPlugins/group/kubenet/Start (10.35s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-021000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-021000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (10.347879167s)

-- stdout --
	* [kubenet-021000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16543
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kubenet-021000 in cluster kubenet-021000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-021000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 08:24:12.497226    4488 out.go:296] Setting OutFile to fd 1 ...
	I0520 08:24:12.497353    4488 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:24:12.497356    4488 out.go:309] Setting ErrFile to fd 2...
	I0520 08:24:12.497358    4488 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:24:12.497429    4488 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16543-1012/.minikube/bin
	I0520 08:24:12.498542    4488 out.go:303] Setting JSON to false
	I0520 08:24:12.513652    4488 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1423,"bootTime":1684594829,"procs":450,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 08:24:12.513726    4488 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0520 08:24:12.518982    4488 out.go:177] * [kubenet-021000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0520 08:24:12.527017    4488 out.go:177]   - MINIKUBE_LOCATION=16543
	I0520 08:24:12.527019    4488 notify.go:220] Checking for updates...
	I0520 08:24:12.532943    4488 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	I0520 08:24:12.536012    4488 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 08:24:12.539998    4488 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 08:24:12.542962    4488 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	I0520 08:24:12.545952    4488 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 08:24:12.549346    4488 config.go:182] Loaded profile config "bridge-021000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0520 08:24:12.549415    4488 config.go:182] Loaded profile config "multinode-046000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0520 08:24:12.549435    4488 driver.go:375] Setting default libvirt URI to qemu:///system
	I0520 08:24:12.552947    4488 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 08:24:12.559967    4488 start.go:295] selected driver: qemu2
	I0520 08:24:12.559973    4488 start.go:870] validating driver "qemu2" against <nil>
	I0520 08:24:12.559978    4488 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 08:24:12.561858    4488 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0520 08:24:12.564829    4488 out.go:177] * Automatically selected the socket_vmnet network
	I0520 08:24:12.568073    4488 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 08:24:12.568096    4488 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0520 08:24:12.568099    4488 start_flags.go:319] config:
	{Name:kubenet-021000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:kubenet-021000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0520 08:24:12.568188    4488 iso.go:125] acquiring lock: {Name:mkf05a14b47e9b67445252fbb4917f3bcb37f054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 08:24:12.576958    4488 out.go:177] * Starting control plane node kubenet-021000 in cluster kubenet-021000
	I0520 08:24:12.580931    4488 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0520 08:24:12.580953    4488 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0520 08:24:12.580971    4488 cache.go:57] Caching tarball of preloaded images
	I0520 08:24:12.581031    4488 preload.go:174] Found /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 08:24:12.581036    4488 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0520 08:24:12.581093    4488 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/kubenet-021000/config.json ...
	I0520 08:24:12.581109    4488 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/kubenet-021000/config.json: {Name:mkff7fe74a83abad986870f57271c7f2e6daf416 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 08:24:12.581313    4488 cache.go:195] Successfully downloaded all kic artifacts
	I0520 08:24:12.581325    4488 start.go:364] acquiring machines lock for kubenet-021000: {Name:mk270c257f49033ffda3b70fea9b877c78314aa8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 08:24:13.059614    4488 start.go:368] acquired machines lock for "kubenet-021000" in 478.235666ms
	I0520 08:24:13.059785    4488 start.go:93] Provisioning new machine with config: &{Name:kubenet-021000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:kubenet-021000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 08:24:13.060052    4488 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 08:24:13.069649    4488 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 08:24:13.115634    4488 start.go:159] libmachine.API.Create for "kubenet-021000" (driver="qemu2")
	I0520 08:24:13.115683    4488 client.go:168] LocalClient.Create starting
	I0520 08:24:13.115811    4488 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca.pem
	I0520 08:24:13.115847    4488 main.go:141] libmachine: Decoding PEM data...
	I0520 08:24:13.115865    4488 main.go:141] libmachine: Parsing certificate...
	I0520 08:24:13.115939    4488 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/cert.pem
	I0520 08:24:13.115970    4488 main.go:141] libmachine: Decoding PEM data...
	I0520 08:24:13.115984    4488 main.go:141] libmachine: Parsing certificate...
	I0520 08:24:13.116551    4488 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16543-1012/.minikube/cache/iso/arm64/minikube-v1.30.1-1684174510-16506-arm64.iso...
	I0520 08:24:13.241061    4488 main.go:141] libmachine: Creating SSH key...
	I0520 08:24:13.367872    4488 main.go:141] libmachine: Creating Disk image...
	I0520 08:24:13.367883    4488 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 08:24:13.368067    4488 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/kubenet-021000/disk.qcow2.raw /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/kubenet-021000/disk.qcow2
	I0520 08:24:13.377341    4488 main.go:141] libmachine: STDOUT: 
	I0520 08:24:13.377361    4488 main.go:141] libmachine: STDERR: 
	I0520 08:24:13.377426    4488 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/kubenet-021000/disk.qcow2 +20000M
	I0520 08:24:13.385635    4488 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 08:24:13.385667    4488 main.go:141] libmachine: STDERR: 
	I0520 08:24:13.385698    4488 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/kubenet-021000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/kubenet-021000/disk.qcow2
	I0520 08:24:13.385710    4488 main.go:141] libmachine: Starting QEMU VM...
	I0520 08:24:13.385757    4488 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/kubenet-021000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/kubenet-021000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/kubenet-021000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:41:cf:ba:c2:12 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/kubenet-021000/disk.qcow2
	I0520 08:24:13.387617    4488 main.go:141] libmachine: STDOUT: 
	I0520 08:24:13.387629    4488 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 08:24:13.387651    4488 client.go:171] LocalClient.Create took 271.960459ms
	I0520 08:24:15.389339    4488 start.go:128] duration metric: createHost completed in 2.329279042s
	I0520 08:24:15.389363    4488 start.go:83] releasing machines lock for "kubenet-021000", held for 2.329701209s
	W0520 08:24:15.389383    4488 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:24:15.403835    4488 out.go:177] * Deleting "kubenet-021000" in qemu2 ...
	W0520 08:24:15.413063    4488 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:24:15.413072    4488 start.go:702] Will try again in 5 seconds ...
	I0520 08:24:20.415212    4488 start.go:364] acquiring machines lock for kubenet-021000: {Name:mk270c257f49033ffda3b70fea9b877c78314aa8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 08:24:20.415734    4488 start.go:368] acquired machines lock for "kubenet-021000" in 436.291µs
	I0520 08:24:20.415900    4488 start.go:93] Provisioning new machine with config: &{Name:kubenet-021000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:kubenet-021000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 08:24:20.416180    4488 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 08:24:20.426857    4488 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 08:24:20.474218    4488 start.go:159] libmachine.API.Create for "kubenet-021000" (driver="qemu2")
	I0520 08:24:20.474273    4488 client.go:168] LocalClient.Create starting
	I0520 08:24:20.474418    4488 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca.pem
	I0520 08:24:20.474485    4488 main.go:141] libmachine: Decoding PEM data...
	I0520 08:24:20.474521    4488 main.go:141] libmachine: Parsing certificate...
	I0520 08:24:20.474607    4488 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/cert.pem
	I0520 08:24:20.474644    4488 main.go:141] libmachine: Decoding PEM data...
	I0520 08:24:20.474660    4488 main.go:141] libmachine: Parsing certificate...
	I0520 08:24:20.475171    4488 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16543-1012/.minikube/cache/iso/arm64/minikube-v1.30.1-1684174510-16506-arm64.iso...
	I0520 08:24:20.618334    4488 main.go:141] libmachine: Creating SSH key...
	I0520 08:24:20.759437    4488 main.go:141] libmachine: Creating Disk image...
	I0520 08:24:20.759444    4488 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 08:24:20.759599    4488 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/kubenet-021000/disk.qcow2.raw /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/kubenet-021000/disk.qcow2
	I0520 08:24:20.768404    4488 main.go:141] libmachine: STDOUT: 
	I0520 08:24:20.768419    4488 main.go:141] libmachine: STDERR: 
	I0520 08:24:20.768480    4488 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/kubenet-021000/disk.qcow2 +20000M
	I0520 08:24:20.775717    4488 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 08:24:20.775728    4488 main.go:141] libmachine: STDERR: 
	I0520 08:24:20.775742    4488 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/kubenet-021000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/kubenet-021000/disk.qcow2
	I0520 08:24:20.775749    4488 main.go:141] libmachine: Starting QEMU VM...
	I0520 08:24:20.775786    4488 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/kubenet-021000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/kubenet-021000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/kubenet-021000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:d0:0a:3e:21:5e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/kubenet-021000/disk.qcow2
	I0520 08:24:20.777379    4488 main.go:141] libmachine: STDOUT: 
	I0520 08:24:20.777393    4488 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 08:24:20.777408    4488 client.go:171] LocalClient.Create took 303.128041ms
	I0520 08:24:22.779568    4488 start.go:128] duration metric: createHost completed in 2.363368792s
	I0520 08:24:22.779636    4488 start.go:83] releasing machines lock for "kubenet-021000", held for 2.36388425s
	W0520 08:24:22.780203    4488 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-021000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-021000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:24:22.792630    4488 out.go:177] 
	W0520 08:24:22.797061    4488 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 08:24:22.797096    4488 out.go:239] * 
	* 
	W0520 08:24:22.799156    4488 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 08:24:22.808588    4488 out.go:177] 

** /stderr **
net_test.go:113: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (10.35s)
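
The retry shape in these logs is identical across profiles: create the VM, hit the refused socket, delete the half-created profile, wait 5 seconds, create once more, then exit with status 80 (GUEST_PROVISION). The following is a compact illustration of that control flow under stated assumptions; createHost here is a hypothetical stand-in, not minikube's actual function:

    // retry_sketch.go -- illustrative only; mirrors the behaviour logged
    // above, not minikube's real implementation.
    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    // createHost stands in for VM creation; in this report it always fails
    // the same way.
    func createHost() error {
        return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
        err := createHost()
        if err != nil {
            fmt.Printf("! StartHost failed, but will try again: %v\n", err)
            time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
            err = createHost()
        }
        if err != nil {
            fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
            os.Exit(80) // the exit status net_test.go reports as "failed start"
        }
    }

Because the second attempt fails for the same external reason as the first, the 5-second backoff cannot help here; every ~10s test duration in this group is just two failed creates plus the delay.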

TestStartStop/group/old-k8s-version/serial/FirstStart (9.92s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-464000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-464000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0: exit status 80 (9.889139084s)

-- stdout --
	* [old-k8s-version-464000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16543
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node old-k8s-version-464000 in cluster old-k8s-version-464000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-464000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 08:24:15.240088    4598 out.go:296] Setting OutFile to fd 1 ...
	I0520 08:24:15.240232    4598 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:24:15.240235    4598 out.go:309] Setting ErrFile to fd 2...
	I0520 08:24:15.240238    4598 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:24:15.240313    4598 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16543-1012/.minikube/bin
	I0520 08:24:15.241357    4598 out.go:303] Setting JSON to false
	I0520 08:24:15.256352    4598 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1426,"bootTime":1684594829,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 08:24:15.256433    4598 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0520 08:24:15.261605    4598 out.go:177] * [old-k8s-version-464000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0520 08:24:15.268663    4598 out.go:177]   - MINIKUBE_LOCATION=16543
	I0520 08:24:15.268734    4598 notify.go:220] Checking for updates...
	I0520 08:24:15.275561    4598 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	I0520 08:24:15.278543    4598 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 08:24:15.282550    4598 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 08:24:15.285581    4598 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	I0520 08:24:15.288498    4598 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 08:24:15.291976    4598 config.go:182] Loaded profile config "kubenet-021000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0520 08:24:15.292039    4598 config.go:182] Loaded profile config "multinode-046000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0520 08:24:15.292059    4598 driver.go:375] Setting default libvirt URI to qemu:///system
	I0520 08:24:15.295563    4598 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 08:24:15.302594    4598 start.go:295] selected driver: qemu2
	I0520 08:24:15.302598    4598 start.go:870] validating driver "qemu2" against <nil>
	I0520 08:24:15.302605    4598 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 08:24:15.304424    4598 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0520 08:24:15.308562    4598 out.go:177] * Automatically selected the socket_vmnet network
	I0520 08:24:15.311628    4598 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 08:24:15.311652    4598 cni.go:84] Creating CNI manager for ""
	I0520 08:24:15.311659    4598 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0520 08:24:15.311663    4598 start_flags.go:319] config:
	{Name:old-k8s-version-464000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-464000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0520 08:24:15.311745    4598 iso.go:125] acquiring lock: {Name:mkf05a14b47e9b67445252fbb4917f3bcb37f054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 08:24:15.316539    4598 out.go:177] * Starting control plane node old-k8s-version-464000 in cluster old-k8s-version-464000
	I0520 08:24:15.324548    4598 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0520 08:24:15.324874    4598 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0520 08:24:15.324887    4598 cache.go:57] Caching tarball of preloaded images
	I0520 08:24:15.324952    4598 preload.go:174] Found /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 08:24:15.324959    4598 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0520 08:24:15.325057    4598 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/old-k8s-version-464000/config.json ...
	I0520 08:24:15.325072    4598 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/old-k8s-version-464000/config.json: {Name:mke408ef5453573d619608c6b41f23047b87d67e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 08:24:15.325381    4598 cache.go:195] Successfully downloaded all kic artifacts
	I0520 08:24:15.325399    4598 start.go:364] acquiring machines lock for old-k8s-version-464000: {Name:mk270c257f49033ffda3b70fea9b877c78314aa8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 08:24:15.389462    4598 start.go:368] acquired machines lock for "old-k8s-version-464000" in 64.021708ms
	I0520 08:24:15.389502    4598 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-464000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-464000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 08:24:15.389563    4598 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 08:24:15.396824    4598 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 08:24:15.420359    4598 start.go:159] libmachine.API.Create for "old-k8s-version-464000" (driver="qemu2")
	I0520 08:24:15.420383    4598 client.go:168] LocalClient.Create starting
	I0520 08:24:15.420472    4598 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca.pem
	I0520 08:24:15.420500    4598 main.go:141] libmachine: Decoding PEM data...
	I0520 08:24:15.420515    4598 main.go:141] libmachine: Parsing certificate...
	I0520 08:24:15.420573    4598 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/cert.pem
	I0520 08:24:15.420591    4598 main.go:141] libmachine: Decoding PEM data...
	I0520 08:24:15.420602    4598 main.go:141] libmachine: Parsing certificate...
	I0520 08:24:15.421063    4598 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16543-1012/.minikube/cache/iso/arm64/minikube-v1.30.1-1684174510-16506-arm64.iso...
	I0520 08:24:15.538468    4598 main.go:141] libmachine: Creating SSH key...
	I0520 08:24:15.603204    4598 main.go:141] libmachine: Creating Disk image...
	I0520 08:24:15.603209    4598 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 08:24:15.603349    4598 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/old-k8s-version-464000/disk.qcow2.raw /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/old-k8s-version-464000/disk.qcow2
	I0520 08:24:15.611985    4598 main.go:141] libmachine: STDOUT: 
	I0520 08:24:15.612008    4598 main.go:141] libmachine: STDERR: 
	I0520 08:24:15.612101    4598 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/old-k8s-version-464000/disk.qcow2 +20000M
	I0520 08:24:15.619236    4598 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 08:24:15.619249    4598 main.go:141] libmachine: STDERR: 
	I0520 08:24:15.619266    4598 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/old-k8s-version-464000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/old-k8s-version-464000/disk.qcow2
	I0520 08:24:15.619278    4598 main.go:141] libmachine: Starting QEMU VM...
	I0520 08:24:15.619310    4598 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/old-k8s-version-464000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/old-k8s-version-464000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/old-k8s-version-464000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:82:a8:1d:58:35 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/old-k8s-version-464000/disk.qcow2
	I0520 08:24:15.620816    4598 main.go:141] libmachine: STDOUT: 
	I0520 08:24:15.620829    4598 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 08:24:15.620854    4598 client.go:171] LocalClient.Create took 200.464084ms
	I0520 08:24:17.623042    4598 start.go:128] duration metric: createHost completed in 2.233457042s
	I0520 08:24:17.623132    4598 start.go:83] releasing machines lock for "old-k8s-version-464000", held for 2.2336535s
	W0520 08:24:17.623219    4598 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:24:17.635980    4598 out.go:177] * Deleting "old-k8s-version-464000" in qemu2 ...
	W0520 08:24:17.656267    4598 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:24:17.656294    4598 start.go:702] Will try again in 5 seconds ...
	I0520 08:24:22.658531    4598 start.go:364] acquiring machines lock for old-k8s-version-464000: {Name:mk270c257f49033ffda3b70fea9b877c78314aa8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 08:24:22.779729    4598 start.go:368] acquired machines lock for "old-k8s-version-464000" in 121.103291ms
	I0520 08:24:22.779896    4598 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-464000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-464000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 08:24:22.780088    4598 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 08:24:22.788686    4598 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 08:24:22.833609    4598 start.go:159] libmachine.API.Create for "old-k8s-version-464000" (driver="qemu2")
	I0520 08:24:22.833653    4598 client.go:168] LocalClient.Create starting
	I0520 08:24:22.833790    4598 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca.pem
	I0520 08:24:22.833845    4598 main.go:141] libmachine: Decoding PEM data...
	I0520 08:24:22.833865    4598 main.go:141] libmachine: Parsing certificate...
	I0520 08:24:22.833932    4598 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/cert.pem
	I0520 08:24:22.833960    4598 main.go:141] libmachine: Decoding PEM data...
	I0520 08:24:22.833976    4598 main.go:141] libmachine: Parsing certificate...
	I0520 08:24:22.834487    4598 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16543-1012/.minikube/cache/iso/arm64/minikube-v1.30.1-1684174510-16506-arm64.iso...
	I0520 08:24:22.971111    4598 main.go:141] libmachine: Creating SSH key...
	I0520 08:24:23.049651    4598 main.go:141] libmachine: Creating Disk image...
	I0520 08:24:23.049661    4598 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 08:24:23.049820    4598 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/old-k8s-version-464000/disk.qcow2.raw /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/old-k8s-version-464000/disk.qcow2
	I0520 08:24:23.058797    4598 main.go:141] libmachine: STDOUT: 
	I0520 08:24:23.058814    4598 main.go:141] libmachine: STDERR: 
	I0520 08:24:23.058880    4598 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/old-k8s-version-464000/disk.qcow2 +20000M
	I0520 08:24:23.066705    4598 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 08:24:23.066720    4598 main.go:141] libmachine: STDERR: 
	I0520 08:24:23.066742    4598 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/old-k8s-version-464000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/old-k8s-version-464000/disk.qcow2
	I0520 08:24:23.066750    4598 main.go:141] libmachine: Starting QEMU VM...
	I0520 08:24:23.066788    4598 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/old-k8s-version-464000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/old-k8s-version-464000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/old-k8s-version-464000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:d3:eb:62:84:7a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/old-k8s-version-464000/disk.qcow2
	I0520 08:24:23.068453    4598 main.go:141] libmachine: STDOUT: 
	I0520 08:24:23.068467    4598 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 08:24:23.068480    4598 client.go:171] LocalClient.Create took 234.8225ms
	I0520 08:24:25.068568    4598 start.go:128] duration metric: createHost completed in 2.288443333s
	I0520 08:24:25.068581    4598 start.go:83] releasing machines lock for "old-k8s-version-464000", held for 2.288836083s
	W0520 08:24:25.068721    4598 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-464000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-464000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:24:25.079678    4598 out.go:177] 
	W0520 08:24:25.082675    4598 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 08:24:25.082688    4598 out.go:239] * 
	* 
	W0520 08:24:25.083239    4598 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 08:24:25.093619    4598 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-464000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-464000 -n old-k8s-version-464000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-464000 -n old-k8s-version-464000: exit status 7 (34.089583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-464000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.92s)
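Both createHost attempts above fail at the same step: socket_vmnet_client cannot reach the socket_vmnet daemon ("Failed to connect to "/var/run/socket_vmnet": Connection refused"), so QEMU is never launched and the profile ends up Stopped. A minimal triage sketch for the CI host, assuming socket_vmnet is installed under /opt/socket_vmnet as the client path in the log suggests, and that its Homebrew service is named socket_vmnet:

	# Is the daemon's unix socket present, and is the process alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet

	# If not, restart it (vmnet requires root); the service name here
	# is an assumption, adjust to the local install.
	sudo brew services restart socket_vmnet

Until that daemon is listening again, every qemu2/socket_vmnet test in this run fails identically.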

TestStartStop/group/no-preload/serial/FirstStart (9.89s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-641000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.27.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-641000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.27.2: exit status 80 (9.838874416s)

-- stdout --
	* [no-preload-641000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16543
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node no-preload-641000 in cluster no-preload-641000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-641000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 08:24:24.951291    4713 out.go:296] Setting OutFile to fd 1 ...
	I0520 08:24:24.951457    4713 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:24:24.951460    4713 out.go:309] Setting ErrFile to fd 2...
	I0520 08:24:24.951462    4713 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:24:24.951530    4713 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16543-1012/.minikube/bin
	I0520 08:24:24.952574    4713 out.go:303] Setting JSON to false
	I0520 08:24:24.967569    4713 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1435,"bootTime":1684594829,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 08:24:24.967643    4713 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0520 08:24:24.972688    4713 out.go:177] * [no-preload-641000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0520 08:24:24.980700    4713 out.go:177]   - MINIKUBE_LOCATION=16543
	I0520 08:24:24.980764    4713 notify.go:220] Checking for updates...
	I0520 08:24:24.987645    4713 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	I0520 08:24:24.989122    4713 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 08:24:24.997661    4713 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 08:24:24.999151    4713 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	I0520 08:24:25.005641    4713 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 08:24:25.009873    4713 config.go:182] Loaded profile config "multinode-046000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0520 08:24:25.009952    4713 config.go:182] Loaded profile config "old-k8s-version-464000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0520 08:24:25.009972    4713 driver.go:375] Setting default libvirt URI to qemu:///system
	I0520 08:24:25.014648    4713 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 08:24:25.019075    4713 start.go:295] selected driver: qemu2
	I0520 08:24:25.019084    4713 start.go:870] validating driver "qemu2" against <nil>
	I0520 08:24:25.019092    4713 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 08:24:25.021019    4713 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0520 08:24:25.024601    4713 out.go:177] * Automatically selected the socket_vmnet network
	I0520 08:24:25.028685    4713 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 08:24:25.028703    4713 cni.go:84] Creating CNI manager for ""
	I0520 08:24:25.028709    4713 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 08:24:25.028713    4713 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 08:24:25.028720    4713 start_flags.go:319] config:
	{Name:no-preload-641000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:no-preload-641000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:d
ocker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0520 08:24:25.028792    4713 iso.go:125] acquiring lock: {Name:mkf05a14b47e9b67445252fbb4917f3bcb37f054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 08:24:25.037633    4713 out.go:177] * Starting control plane node no-preload-641000 in cluster no-preload-641000
	I0520 08:24:25.041665    4713 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0520 08:24:25.041748    4713 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/no-preload-641000/config.json ...
	I0520 08:24:25.041764    4713 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/no-preload-641000/config.json: {Name:mk3e138f05d54ee8b5939692b1b7179f71f7926a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 08:24:25.041792    4713 cache.go:107] acquiring lock: {Name:mk1c20507876e80963a4184c3ad3a7ef44077016 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 08:24:25.041791    4713 cache.go:107] acquiring lock: {Name:mk16b62ad5d32a020aeda5397f7794760ecffb2d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 08:24:25.041808    4713 cache.go:107] acquiring lock: {Name:mkef9ad2f4e7096c73a5f449ffd595a3adfcd9bd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 08:24:25.041881    4713 cache.go:115] /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0520 08:24:25.041888    4713 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 102.375µs
	I0520 08:24:25.041908    4713 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0520 08:24:25.041917    4713 cache.go:107] acquiring lock: {Name:mk9934d56a9d50e87ac2fbd4847ca5a59099f651 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 08:24:25.041921    4713 cache.go:107] acquiring lock: {Name:mk811e1b5b242888cb17f3435c4f4263f0384e14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 08:24:25.041954    4713 cache.go:107] acquiring lock: {Name:mk5caa061f40481417234c0803936d5dd5376e91 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 08:24:25.041955    4713 cache.go:107] acquiring lock: {Name:mkde411108e981f49f11f9d8950d16804716b18f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 08:24:25.042038    4713 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.27.2
	I0520 08:24:25.042050    4713 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0520 08:24:25.042050    4713 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.27.2
	I0520 08:24:25.042101    4713 cache.go:107] acquiring lock: {Name:mk12f664196753926beb77b61e2096e57b9c03aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 08:24:25.042149    4713 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.27.2
	I0520 08:24:25.042163    4713 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.27.2
	I0520 08:24:25.042211    4713 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.7-0
	I0520 08:24:25.042284    4713 cache.go:195] Successfully downloaded all kic artifacts
	I0520 08:24:25.042308    4713 start.go:364] acquiring machines lock for no-preload-641000: {Name:mk270c257f49033ffda3b70fea9b877c78314aa8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 08:24:25.042430    4713 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0520 08:24:25.057420    4713 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.27.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.27.2
	I0520 08:24:25.059021    4713 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.27.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.27.2
	I0520 08:24:25.063149    4713 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0520 08:24:25.063453    4713 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.27.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.27.2
	I0520 08:24:25.065728    4713 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.27.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.27.2
	I0520 08:24:25.066733    4713 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0520 08:24:25.066885    4713 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.7-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.7-0
	I0520 08:24:25.068616    4713 start.go:368] acquired machines lock for "no-preload-641000" in 26.299959ms
	I0520 08:24:25.068651    4713 start.go:93] Provisioning new machine with config: &{Name:no-preload-641000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.27.2 ClusterName:no-preload-641000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 08:24:25.068702    4713 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 08:24:25.076619    4713 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 08:24:25.091143    4713 start.go:159] libmachine.API.Create for "no-preload-641000" (driver="qemu2")
	I0520 08:24:25.091178    4713 client.go:168] LocalClient.Create starting
	I0520 08:24:25.091239    4713 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca.pem
	I0520 08:24:25.091258    4713 main.go:141] libmachine: Decoding PEM data...
	I0520 08:24:25.091269    4713 main.go:141] libmachine: Parsing certificate...
	I0520 08:24:25.091317    4713 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/cert.pem
	I0520 08:24:25.091331    4713 main.go:141] libmachine: Decoding PEM data...
	I0520 08:24:25.091339    4713 main.go:141] libmachine: Parsing certificate...
	I0520 08:24:25.098070    4713 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16543-1012/.minikube/cache/iso/arm64/minikube-v1.30.1-1684174510-16506-arm64.iso...
	I0520 08:24:25.231440    4713 main.go:141] libmachine: Creating SSH key...
	I0520 08:24:25.289728    4713 main.go:141] libmachine: Creating Disk image...
	I0520 08:24:25.289738    4713 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 08:24:25.289912    4713 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/no-preload-641000/disk.qcow2.raw /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/no-preload-641000/disk.qcow2
	I0520 08:24:25.299444    4713 main.go:141] libmachine: STDOUT: 
	I0520 08:24:25.299466    4713 main.go:141] libmachine: STDERR: 
	I0520 08:24:25.299526    4713 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/no-preload-641000/disk.qcow2 +20000M
	I0520 08:24:25.307806    4713 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 08:24:25.307823    4713 main.go:141] libmachine: STDERR: 
	I0520 08:24:25.307845    4713 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/no-preload-641000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/no-preload-641000/disk.qcow2
	I0520 08:24:25.307865    4713 main.go:141] libmachine: Starting QEMU VM...
	I0520 08:24:25.307917    4713 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/no-preload-641000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/no-preload-641000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/no-preload-641000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:48:6a:b0:ab:cf -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/no-preload-641000/disk.qcow2
	I0520 08:24:25.309861    4713 main.go:141] libmachine: STDOUT: 
	I0520 08:24:25.309875    4713 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 08:24:25.309902    4713 client.go:171] LocalClient.Create took 218.719625ms
	I0520 08:24:26.268597    4713 cache.go:162] opening:  /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.2
	I0520 08:24:26.272756    4713 cache.go:162] opening:  /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.2
	I0520 08:24:26.310147    4713 cache.go:162] opening:  /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0520 08:24:26.441890    4713 cache.go:157] /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0520 08:24:26.441913    4713 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 1.399999666s
	I0520 08:24:26.441922    4713 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0520 08:24:26.511424    4713 cache.go:162] opening:  /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.2
	I0520 08:24:26.517457    4713 cache.go:162] opening:  /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.2
	I0520 08:24:26.696314    4713 cache.go:162] opening:  /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1
	I0520 08:24:26.890008    4713 cache.go:162] opening:  /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0
	I0520 08:24:27.310218    4713 start.go:128] duration metric: createHost completed in 2.241486375s
	I0520 08:24:27.310260    4713 start.go:83] releasing machines lock for "no-preload-641000", held for 2.241635166s
	W0520 08:24:27.310319    4713 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:24:27.326932    4713 out.go:177] * Deleting "no-preload-641000" in qemu2 ...
	W0520 08:24:27.349135    4713 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:24:27.349166    4713 start.go:702] Will try again in 5 seconds ...
	I0520 08:24:28.289313    4713 cache.go:157] /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I0520 08:24:28.289359    4713 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1" took 3.247342834s
	I0520 08:24:28.289417    4713 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I0520 08:24:29.404152    4713 cache.go:157] /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.2 exists
	I0520 08:24:29.404216    4713 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.27.2" -> "/Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.2" took 4.362425709s
	I0520 08:24:29.404247    4713 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.27.2 -> /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.2 succeeded
	I0520 08:24:29.740647    4713 cache.go:157] /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.2 exists
	I0520 08:24:29.740692    4713 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.27.2" -> "/Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.2" took 4.698742417s
	I0520 08:24:29.740754    4713 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.27.2 -> /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.2 succeeded
	I0520 08:24:30.699181    4713 cache.go:157] /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.2 exists
	I0520 08:24:30.699229    4713 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.27.2" -> "/Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.2" took 5.657353042s
	I0520 08:24:30.699256    4713 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.27.2 -> /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.2 succeeded
	I0520 08:24:32.306778    4713 cache.go:157] /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.2 exists
	I0520 08:24:32.306820    4713 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.27.2" -> "/Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.2" took 7.265051791s
	I0520 08:24:32.306846    4713 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.27.2 -> /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.2 succeeded
	I0520 08:24:32.358578    4713 start.go:364] acquiring machines lock for no-preload-641000: {Name:mk270c257f49033ffda3b70fea9b877c78314aa8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 08:24:32.367615    4713 start.go:368] acquired machines lock for "no-preload-641000" in 8.9905ms
	I0520 08:24:32.367680    4713 start.go:93] Provisioning new machine with config: &{Name:no-preload-641000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.27.2 ClusterName:no-preload-641000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 08:24:32.367897    4713 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 08:24:32.376939    4713 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 08:24:32.416094    4713 start.go:159] libmachine.API.Create for "no-preload-641000" (driver="qemu2")
	I0520 08:24:32.416125    4713 client.go:168] LocalClient.Create starting
	I0520 08:24:32.416266    4713 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca.pem
	I0520 08:24:32.416305    4713 main.go:141] libmachine: Decoding PEM data...
	I0520 08:24:32.416322    4713 main.go:141] libmachine: Parsing certificate...
	I0520 08:24:32.416402    4713 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/cert.pem
	I0520 08:24:32.416431    4713 main.go:141] libmachine: Decoding PEM data...
	I0520 08:24:32.416448    4713 main.go:141] libmachine: Parsing certificate...
	I0520 08:24:32.416884    4713 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16543-1012/.minikube/cache/iso/arm64/minikube-v1.30.1-1684174510-16506-arm64.iso...
	I0520 08:24:32.537107    4713 main.go:141] libmachine: Creating SSH key...
	I0520 08:24:32.699317    4713 main.go:141] libmachine: Creating Disk image...
	I0520 08:24:32.699326    4713 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 08:24:32.699483    4713 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/no-preload-641000/disk.qcow2.raw /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/no-preload-641000/disk.qcow2
	I0520 08:24:32.708636    4713 main.go:141] libmachine: STDOUT: 
	I0520 08:24:32.708658    4713 main.go:141] libmachine: STDERR: 
	I0520 08:24:32.708730    4713 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/no-preload-641000/disk.qcow2 +20000M
	I0520 08:24:32.716910    4713 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 08:24:32.716931    4713 main.go:141] libmachine: STDERR: 
	I0520 08:24:32.716945    4713 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/no-preload-641000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/no-preload-641000/disk.qcow2
	I0520 08:24:32.716954    4713 main.go:141] libmachine: Starting QEMU VM...
	I0520 08:24:32.717010    4713 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/no-preload-641000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/no-preload-641000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/no-preload-641000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:b6:2a:d8:2a:01 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/no-preload-641000/disk.qcow2
	I0520 08:24:32.718634    4713 main.go:141] libmachine: STDOUT: 
	I0520 08:24:32.718651    4713 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 08:24:32.718663    4713 client.go:171] LocalClient.Create took 302.53375ms
	I0520 08:24:34.719462    4713 start.go:128] duration metric: createHost completed in 2.351541417s
	I0520 08:24:34.719508    4713 start.go:83] releasing machines lock for "no-preload-641000", held for 2.351865334s
	W0520 08:24:34.719832    4713 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-641000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-641000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:24:34.737235    4713 out.go:177] 
	W0520 08:24:34.741459    4713 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 08:24:34.741490    4713 out.go:239] * 
	* 
	W0520 08:24:34.744055    4713 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 08:24:34.753281    4713 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-641000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.27.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-641000 -n no-preload-641000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-641000 -n no-preload-641000: exit status 7 (48.581084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-641000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.89s)
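Note that the cache.go lines above show the v1.27.2 control-plane images (pause:3.9, coredns v1.10.1, kube-proxy, kube-scheduler, kube-controller-manager, kube-apiserver) were all downloaded and saved to the arm64 cache even though host creation failed, so registry access is healthy and the failure is confined to host networking. An illustrative spot-check, reusing the cache path from the log:

	# Path taken from the cache.go lines above; listing is illustrative only.
	ls -lh /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/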

TestStartStop/group/old-k8s-version/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-464000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-464000 create -f testdata/busybox.yaml: exit status 1 (30.570375ms)

** stderr ** 
	error: context "old-k8s-version-464000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-464000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-464000 -n old-k8s-version-464000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-464000 -n old-k8s-version-464000: exit status 7 (34.411125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-464000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-464000 -n old-k8s-version-464000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-464000 -n old-k8s-version-464000: exit status 7 (33.941125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-464000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.10s)
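This failure is purely downstream of FirstStart: the cluster was never created, so no kubeconfig context named old-k8s-version-464000 exists for kubectl to target, and the remaining serial steps for this profile fail on the same missing context. A quick confirmation with stock kubectl:

	# Lists the contexts in the kubeconfig used by this run; the
	# old-k8s-version-464000 context will be absent.
	kubectl config get-contexts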

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-464000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-464000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-464000 describe deploy/metrics-server -n kube-system: exit status 1 (27.975167ms)

** stderr ** 
	error: context "old-k8s-version-464000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-464000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-464000 -n old-k8s-version-464000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-464000 -n old-k8s-version-464000: exit status 7 (29.015917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-464000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)
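Same cascade as DeployApp: `addons enable` returned success (no non-zero exit is logged, apparently because it only needs to record the addon in the stored profile), but the describe that validates the image override had no apiserver to query. For reference, a sketch of the manual equivalent of the assertion at start_stop_delete_test.go:221, runnable only once a cluster exists:

	# The test expects the deployment to reference the overridden image
	# supplied via --images/--registries.
	kubectl --context old-k8s-version-464000 -n kube-system \
	  describe deploy/metrics-server | grep "fake.domain/registry.k8s.io/echoserver:1.4"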

TestStartStop/group/old-k8s-version/serial/SecondStart (6.95s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-464000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-464000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0: exit status 80 (6.901018542s)

-- stdout --
	* [old-k8s-version-464000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16543
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.2
	* Using the qemu2 driver based on existing profile
	* Starting control plane node old-k8s-version-464000 in cluster old-k8s-version-464000
	* Restarting existing qemu2 VM for "old-k8s-version-464000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-464000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 08:24:25.533145    4770 out.go:296] Setting OutFile to fd 1 ...
	I0520 08:24:25.533278    4770 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:24:25.533281    4770 out.go:309] Setting ErrFile to fd 2...
	I0520 08:24:25.533283    4770 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:24:25.533352    4770 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16543-1012/.minikube/bin
	I0520 08:24:25.534361    4770 out.go:303] Setting JSON to false
	I0520 08:24:25.550430    4770 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1436,"bootTime":1684594829,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 08:24:25.550518    4770 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0520 08:24:25.554626    4770 out.go:177] * [old-k8s-version-464000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0520 08:24:25.561584    4770 out.go:177]   - MINIKUBE_LOCATION=16543
	I0520 08:24:25.561651    4770 notify.go:220] Checking for updates...
	I0520 08:24:25.569479    4770 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	I0520 08:24:25.572672    4770 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 08:24:25.575661    4770 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 08:24:25.582610    4770 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	I0520 08:24:25.590449    4770 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 08:24:25.593929    4770 config.go:182] Loaded profile config "old-k8s-version-464000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0520 08:24:25.597618    4770 out.go:177] * Kubernetes 1.27.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.2
	I0520 08:24:25.600594    4770 driver.go:375] Setting default libvirt URI to qemu:///system
	I0520 08:24:25.604646    4770 out.go:177] * Using the qemu2 driver based on existing profile
	I0520 08:24:25.612629    4770 start.go:295] selected driver: qemu2
	I0520 08:24:25.612638    4770 start.go:870] validating driver "qemu2" against &{Name:old-k8s-version-464000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-464000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0520 08:24:25.612694    4770 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 08:24:25.614219    4770 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 08:24:25.614246    4770 cni.go:84] Creating CNI manager for ""
	I0520 08:24:25.614253    4770 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0520 08:24:25.614259    4770 start_flags.go:319] config:
	{Name:old-k8s-version-464000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-464000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0520 08:24:25.614325    4770 iso.go:125] acquiring lock: {Name:mkf05a14b47e9b67445252fbb4917f3bcb37f054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 08:24:25.621637    4770 out.go:177] * Starting control plane node old-k8s-version-464000 in cluster old-k8s-version-464000
	I0520 08:24:25.625608    4770 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0520 08:24:25.625638    4770 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0520 08:24:25.625652    4770 cache.go:57] Caching tarball of preloaded images
	I0520 08:24:25.625706    4770 preload.go:174] Found /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 08:24:25.625711    4770 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0520 08:24:25.625767    4770 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/old-k8s-version-464000/config.json ...
	I0520 08:24:25.626057    4770 cache.go:195] Successfully downloaded all kic artifacts
	I0520 08:24:25.626068    4770 start.go:364] acquiring machines lock for old-k8s-version-464000: {Name:mk270c257f49033ffda3b70fea9b877c78314aa8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 08:24:27.310421    4770 start.go:368] acquired machines lock for "old-k8s-version-464000" in 1.684291666s
	I0520 08:24:27.310537    4770 start.go:96] Skipping create...Using existing machine configuration
	I0520 08:24:27.310566    4770 fix.go:55] fixHost starting: 
	I0520 08:24:27.311196    4770 fix.go:103] recreateIfNeeded on old-k8s-version-464000: state=Stopped err=<nil>
	W0520 08:24:27.311233    4770 fix.go:129] unexpected machine state, will restart: <nil>
	I0520 08:24:27.321023    4770 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-464000" ...
	I0520 08:24:27.331315    4770 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/old-k8s-version-464000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/old-k8s-version-464000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/old-k8s-version-464000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:d3:eb:62:84:7a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/old-k8s-version-464000/disk.qcow2
	I0520 08:24:27.341175    4770 main.go:141] libmachine: STDOUT: 
	I0520 08:24:27.341302    4770 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 08:24:27.341424    4770 fix.go:57] fixHost completed within 30.862709ms
	I0520 08:24:27.341443    4770 start.go:83] releasing machines lock for "old-k8s-version-464000", held for 30.976083ms
	W0520 08:24:27.341477    4770 start.go:687] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 08:24:27.341811    4770 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:24:27.341826    4770 start.go:702] Will try again in 5 seconds ...
	I0520 08:24:32.343896    4770 start.go:364] acquiring machines lock for old-k8s-version-464000: {Name:mk270c257f49033ffda3b70fea9b877c78314aa8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 08:24:32.344432    4770 start.go:368] acquired machines lock for "old-k8s-version-464000" in 461.291µs
	I0520 08:24:32.344551    4770 start.go:96] Skipping create...Using existing machine configuration
	I0520 08:24:32.344569    4770 fix.go:55] fixHost starting: 
	I0520 08:24:32.345269    4770 fix.go:103] recreateIfNeeded on old-k8s-version-464000: state=Stopped err=<nil>
	W0520 08:24:32.345296    4770 fix.go:129] unexpected machine state, will restart: <nil>
	I0520 08:24:32.350927    4770 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-464000" ...
	I0520 08:24:32.358102    4770 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/old-k8s-version-464000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/old-k8s-version-464000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/old-k8s-version-464000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:d3:eb:62:84:7a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/old-k8s-version-464000/disk.qcow2
	I0520 08:24:32.367352    4770 main.go:141] libmachine: STDOUT: 
	I0520 08:24:32.367429    4770 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 08:24:32.367521    4770 fix.go:57] fixHost completed within 22.950917ms
	I0520 08:24:32.367542    4770 start.go:83] releasing machines lock for "old-k8s-version-464000", held for 23.090875ms
	W0520 08:24:32.368789    4770 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-464000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-464000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:24:32.384931    4770 out.go:177] 
	W0520 08:24:32.389227    4770 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 08:24:32.389257    4770 out.go:239] * 
	* 
	W0520 08:24:32.390836    4770 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 08:24:32.397948    4770 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-464000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-464000 -n old-k8s-version-464000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-464000 -n old-k8s-version-464000: exit status 7 (44.719ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-464000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (6.95s)
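
Every start failure in this group reduces to the same host-side error: Failed to connect to "/var/run/socket_vmnet": Connection refused. Nothing is accepting connections on the socket_vmnet unix socket at the moment socket_vmnet_client tries to launch QEMU. A minimal standalone Go sketch of that probe (not part of the test suite; the socket path is the SocketVMnetPath from the profile config dump above):

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// SocketVMnetPath from the profile config above; socket_vmnet_client must
	// connect here before it can hand a network fd to qemu-system-aarch64.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// A refused connection here matches the ERROR lines above and means
		// the socket_vmnet daemon is not running on the CI host.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections at", sock)
}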

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-464000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-464000 -n old-k8s-version-464000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-464000 -n old-k8s-version-464000: exit status 7 (32.931708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-464000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)
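
This failure is downstream of the failed SecondStart: because the VM never came up, minikube never wrote an "old-k8s-version-464000" context into the shared kubeconfig, so the test's client-config step fails before any pod can be polled. Roughly what that resolution step amounts to, sketched with client-go (illustrative; the test helper's exact code path is not shown in this log):

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Resolve a named context from the kubeconfig (default loading rules
	// honor $KUBECONFIG, which the test run points at its own file).
	rules := clientcmd.NewDefaultClientConfigLoadingRules()
	overrides := &clientcmd.ConfigOverrides{CurrentContext: "old-k8s-version-464000"}
	cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, overrides).ClientConfig()
	if err != nil {
		// With no such context in the kubeconfig this reports:
		// context "old-k8s-version-464000" does not exist
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("API server:", cfg.Host)
}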

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-464000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-464000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-464000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (30.572834ms)

** stderr ** 
	error: context "old-k8s-version-464000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-464000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-464000 -n old-k8s-version-464000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-464000 -n old-k8s-version-464000: exit status 7 (30.918625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-464000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p old-k8s-version-464000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p old-k8s-version-464000 "sudo crictl images -o json": exit status 89 (44.789708ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p old-k8s-version-464000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p old-k8s-version-464000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p old-k8s-version-464000"
start_stop_delete_test.go:304: v1.16.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.6.2",
- 	"k8s.gcr.io/etcd:3.3.15-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.16.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.16.0",
- 	"k8s.gcr.io/kube-proxy:v1.16.0",
- 	"k8s.gcr.io/kube-scheduler:v1.16.0",
- 	"k8s.gcr.io/pause:3.1",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-464000 -n old-k8s-version-464000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-464000 -n old-k8s-version-464000: exit status 7 (28.694875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-464000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
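
VerifyKubernetesImages fails in two layers: "minikube ssh" exits 89 with a plain-text hint because the control plane is down, and the test then feeds that hint into a JSON decoder, which is where the "invalid character '*'" message comes from. A sketch of that decode step (the struct mirrors only the subset of "crictl images -o json" output an image comparison needs; the field names are an assumption based on crictl's documented output, not copied from the test):

package main

import (
	"encoding/json"
	"fmt"
)

// imageList models the relevant part of `crictl images -o json`.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	// This is the plain-text hint the failing `minikube ssh` actually returned;
	// decoding it as JSON fails on the leading '*', exactly as in the log above.
	output := []byte("* The control plane node must be running for this command")
	var list imageList
	if err := json.Unmarshal(output, &list); err != nil {
		fmt.Println("failed to decode images json:", err)
		return
	}
	for _, img := range list.Images {
		fmt.Println(img.RepoTags)
	}
}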

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-464000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-464000 --alsologtostderr -v=1: exit status 89 (42.91475ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p old-k8s-version-464000"

                                                
** stderr ** 
	I0520 08:24:32.649659    4857 out.go:296] Setting OutFile to fd 1 ...
	I0520 08:24:32.649999    4857 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:24:32.650002    4857 out.go:309] Setting ErrFile to fd 2...
	I0520 08:24:32.650005    4857 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:24:32.650077    4857 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16543-1012/.minikube/bin
	I0520 08:24:32.650254    4857 out.go:303] Setting JSON to false
	I0520 08:24:32.650263    4857 mustload.go:65] Loading cluster: old-k8s-version-464000
	I0520 08:24:32.650433    4857 config.go:182] Loaded profile config "old-k8s-version-464000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0520 08:24:32.654902    4857 out.go:177] * The control plane node must be running for this command
	I0520 08:24:32.661865    4857 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-464000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-464000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-464000 -n old-k8s-version-464000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-464000 -n old-k8s-version-464000: exit status 7 (28.4115ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-464000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-464000 -n old-k8s-version-464000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-464000 -n old-k8s-version-464000: exit status 7 (27.718ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-464000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)

TestStartStop/group/embed-certs/serial/FirstStart (11.4s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-782000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-782000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.2: exit status 80 (11.350939584s)

-- stdout --
	* [embed-certs-782000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16543
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node embed-certs-782000 in cluster embed-certs-782000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-782000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 08:24:33.141620    4883 out.go:296] Setting OutFile to fd 1 ...
	I0520 08:24:33.141735    4883 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:24:33.141738    4883 out.go:309] Setting ErrFile to fd 2...
	I0520 08:24:33.141740    4883 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:24:33.141817    4883 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16543-1012/.minikube/bin
	I0520 08:24:33.142805    4883 out.go:303] Setting JSON to false
	I0520 08:24:33.157801    4883 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1444,"bootTime":1684594829,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 08:24:33.157870    4883 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0520 08:24:33.162797    4883 out.go:177] * [embed-certs-782000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0520 08:24:33.170795    4883 notify.go:220] Checking for updates...
	I0520 08:24:33.174755    4883 out.go:177]   - MINIKUBE_LOCATION=16543
	I0520 08:24:33.182731    4883 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	I0520 08:24:33.189712    4883 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 08:24:33.197773    4883 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 08:24:33.205745    4883 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	I0520 08:24:33.213705    4883 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 08:24:33.218132    4883 config.go:182] Loaded profile config "multinode-046000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0520 08:24:33.218199    4883 config.go:182] Loaded profile config "no-preload-641000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0520 08:24:33.218217    4883 driver.go:375] Setting default libvirt URI to qemu:///system
	I0520 08:24:33.222606    4883 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 08:24:33.229755    4883 start.go:295] selected driver: qemu2
	I0520 08:24:33.229761    4883 start.go:870] validating driver "qemu2" against <nil>
	I0520 08:24:33.229768    4883 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 08:24:33.231755    4883 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0520 08:24:33.236596    4883 out.go:177] * Automatically selected the socket_vmnet network
	I0520 08:24:33.240870    4883 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 08:24:33.240891    4883 cni.go:84] Creating CNI manager for ""
	I0520 08:24:33.240900    4883 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 08:24:33.240904    4883 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 08:24:33.240914    4883 start_flags.go:319] config:
	{Name:embed-certs-782000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:embed-certs-782000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0520 08:24:33.240983    4883 iso.go:125] acquiring lock: {Name:mkf05a14b47e9b67445252fbb4917f3bcb37f054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 08:24:33.249792    4883 out.go:177] * Starting control plane node embed-certs-782000 in cluster embed-certs-782000
	I0520 08:24:33.253780    4883 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0520 08:24:33.253808    4883 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0520 08:24:33.253822    4883 cache.go:57] Caching tarball of preloaded images
	I0520 08:24:33.253897    4883 preload.go:174] Found /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 08:24:33.253902    4883 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0520 08:24:33.253965    4883 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/embed-certs-782000/config.json ...
	I0520 08:24:33.253980    4883 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/embed-certs-782000/config.json: {Name:mk2605e1e0c37b01ccc641c520cb0e97fac42102 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 08:24:33.254155    4883 cache.go:195] Successfully downloaded all kic artifacts
	I0520 08:24:33.254167    4883 start.go:364] acquiring machines lock for embed-certs-782000: {Name:mk270c257f49033ffda3b70fea9b877c78314aa8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 08:24:34.719687    4883 start.go:368] acquired machines lock for "embed-certs-782000" in 1.465429375s
	I0520 08:24:34.719838    4883 start.go:93] Provisioning new machine with config: &{Name:embed-certs-782000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:embed-certs-782000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 08:24:34.720133    4883 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 08:24:34.733306    4883 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 08:24:34.773432    4883 start.go:159] libmachine.API.Create for "embed-certs-782000" (driver="qemu2")
	I0520 08:24:34.773488    4883 client.go:168] LocalClient.Create starting
	I0520 08:24:34.773605    4883 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca.pem
	I0520 08:24:34.773639    4883 main.go:141] libmachine: Decoding PEM data...
	I0520 08:24:34.773660    4883 main.go:141] libmachine: Parsing certificate...
	I0520 08:24:34.773734    4883 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/cert.pem
	I0520 08:24:34.773758    4883 main.go:141] libmachine: Decoding PEM data...
	I0520 08:24:34.773770    4883 main.go:141] libmachine: Parsing certificate...
	I0520 08:24:34.774306    4883 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16543-1012/.minikube/cache/iso/arm64/minikube-v1.30.1-1684174510-16506-arm64.iso...
	I0520 08:24:34.901712    4883 main.go:141] libmachine: Creating SSH key...
	I0520 08:24:35.035586    4883 main.go:141] libmachine: Creating Disk image...
	I0520 08:24:35.035599    4883 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 08:24:35.035778    4883 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/embed-certs-782000/disk.qcow2.raw /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/embed-certs-782000/disk.qcow2
	I0520 08:24:35.044866    4883 main.go:141] libmachine: STDOUT: 
	I0520 08:24:35.044886    4883 main.go:141] libmachine: STDERR: 
	I0520 08:24:35.044940    4883 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/embed-certs-782000/disk.qcow2 +20000M
	I0520 08:24:35.060927    4883 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 08:24:35.060947    4883 main.go:141] libmachine: STDERR: 
	I0520 08:24:35.060961    4883 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/embed-certs-782000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/embed-certs-782000/disk.qcow2
	I0520 08:24:35.060966    4883 main.go:141] libmachine: Starting QEMU VM...
	I0520 08:24:35.061003    4883 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/embed-certs-782000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/embed-certs-782000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/embed-certs-782000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:ba:26:5f:58:48 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/embed-certs-782000/disk.qcow2
	I0520 08:24:35.062660    4883 main.go:141] libmachine: STDOUT: 
	I0520 08:24:35.062678    4883 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 08:24:35.062696    4883 client.go:171] LocalClient.Create took 289.202333ms
	I0520 08:24:37.064907    4883 start.go:128] duration metric: createHost completed in 2.344742417s
	I0520 08:24:37.064991    4883 start.go:83] releasing machines lock for "embed-certs-782000", held for 2.34526875s
	W0520 08:24:37.065054    4883 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:24:37.083677    4883 out.go:177] * Deleting "embed-certs-782000" in qemu2 ...
	W0520 08:24:37.105957    4883 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:24:37.105986    4883 start.go:702] Will try again in 5 seconds ...
	I0520 08:24:42.107738    4883 start.go:364] acquiring machines lock for embed-certs-782000: {Name:mk270c257f49033ffda3b70fea9b877c78314aa8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 08:24:42.119220    4883 start.go:368] acquired machines lock for "embed-certs-782000" in 11.411125ms
	I0520 08:24:42.119264    4883 start.go:93] Provisioning new machine with config: &{Name:embed-certs-782000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:embed-certs-782000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 08:24:42.119509    4883 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 08:24:42.127483    4883 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 08:24:42.170060    4883 start.go:159] libmachine.API.Create for "embed-certs-782000" (driver="qemu2")
	I0520 08:24:42.170103    4883 client.go:168] LocalClient.Create starting
	I0520 08:24:42.170237    4883 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca.pem
	I0520 08:24:42.170282    4883 main.go:141] libmachine: Decoding PEM data...
	I0520 08:24:42.170297    4883 main.go:141] libmachine: Parsing certificate...
	I0520 08:24:42.170365    4883 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/cert.pem
	I0520 08:24:42.170400    4883 main.go:141] libmachine: Decoding PEM data...
	I0520 08:24:42.170412    4883 main.go:141] libmachine: Parsing certificate...
	I0520 08:24:42.170919    4883 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16543-1012/.minikube/cache/iso/arm64/minikube-v1.30.1-1684174510-16506-arm64.iso...
	I0520 08:24:42.293937    4883 main.go:141] libmachine: Creating SSH key...
	I0520 08:24:42.397737    4883 main.go:141] libmachine: Creating Disk image...
	I0520 08:24:42.397750    4883 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 08:24:42.397936    4883 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/embed-certs-782000/disk.qcow2.raw /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/embed-certs-782000/disk.qcow2
	I0520 08:24:42.415188    4883 main.go:141] libmachine: STDOUT: 
	I0520 08:24:42.415206    4883 main.go:141] libmachine: STDERR: 
	I0520 08:24:42.415267    4883 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/embed-certs-782000/disk.qcow2 +20000M
	I0520 08:24:42.423478    4883 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 08:24:42.423497    4883 main.go:141] libmachine: STDERR: 
	I0520 08:24:42.423510    4883 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/embed-certs-782000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/embed-certs-782000/disk.qcow2
	I0520 08:24:42.423517    4883 main.go:141] libmachine: Starting QEMU VM...
	I0520 08:24:42.423549    4883 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/embed-certs-782000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/embed-certs-782000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/embed-certs-782000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:b7:2a:b8:52:63 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/embed-certs-782000/disk.qcow2
	I0520 08:24:42.425282    4883 main.go:141] libmachine: STDOUT: 
	I0520 08:24:42.425302    4883 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 08:24:42.425314    4883 client.go:171] LocalClient.Create took 255.207083ms
	I0520 08:24:44.427544    4883 start.go:128] duration metric: createHost completed in 2.307953542s
	I0520 08:24:44.427676    4883 start.go:83] releasing machines lock for "embed-certs-782000", held for 2.308430375s
	W0520 08:24:44.428168    4883 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-782000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-782000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:24:44.439820    4883 out.go:177] 
	W0520 08:24:44.444009    4883 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 08:24:44.444119    4883 out.go:239] * 
	* 
	W0520 08:24:44.446197    4883 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 08:24:44.456799    4883 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-782000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-782000 -n embed-certs-782000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-782000 -n embed-certs-782000: exit status 7 (49.4395ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-782000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (11.40s)
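
Unlike the restart cases above, this trace shows a fresh create: the ISO copy, SSH key generation, and qemu-img convert/resize all succeed, and only the final VM launch through socket_vmnet_client fails, after which minikube deletes the half-created machine and retries once after 5 seconds (start.go:702). The retry shape visible in the log, sketched in Go (names are illustrative, not minikube's actual implementation):

package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for libmachine's create/start call; in the trace above
// it always fails with the socket_vmnet connection error.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func startWithRetry() error {
	err := startHost()
	if err == nil {
		return nil
	}
	fmt.Println("! StartHost failed, but will try again:", err)
	time.Sleep(5 * time.Second)
	// Second and final attempt; its error is what surfaces as GUEST_PROVISION.
	return startHost()
}

func main() {
	if err := startWithRetry(); err != nil {
		fmt.Println("X Exiting due to GUEST_PROVISION:", err)
	}
}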

TestStartStop/group/no-preload/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-641000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-641000 create -f testdata/busybox.yaml: exit status 1 (31.29875ms)

** stderr ** 
	error: context "no-preload-641000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-641000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-641000 -n no-preload-641000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-641000 -n no-preload-641000: exit status 7 (32.966ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-641000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-641000 -n no-preload-641000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-641000 -n no-preload-641000: exit status 7 (31.056625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-641000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.10s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-641000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-641000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-641000 describe deploy/metrics-server -n kube-system: exit status 1 (27.420042ms)

** stderr ** 
	error: context "no-preload-641000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-641000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-641000 -n no-preload-641000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-641000 -n no-preload-641000: exit status 7 (29.151792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-641000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)
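
Note: every kubectl step in this group fails for the same root cause: the first start never completed, so no kubeconfig context named "no-preload-641000" was ever written. A quick hedged check from the CI host, using only the KUBECONFIG path shown in the logs above (plain kubectl, nothing test-specific assumed):

	# List the contexts in the kubeconfig the harness uses; a profile
	# whose VM never booted will simply be absent from this list.
	KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig kubectl config get-contexts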

TestStartStop/group/no-preload/serial/SecondStart (7.03s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-641000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.27.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-641000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.27.2: exit status 80 (6.982080833s)

-- stdout --
	* [no-preload-641000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16543
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node no-preload-641000 in cluster no-preload-641000
	* Restarting existing qemu2 VM for "no-preload-641000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-641000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 08:24:35.200446    4910 out.go:296] Setting OutFile to fd 1 ...
	I0520 08:24:35.200555    4910 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:24:35.200558    4910 out.go:309] Setting ErrFile to fd 2...
	I0520 08:24:35.200561    4910 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:24:35.200628    4910 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16543-1012/.minikube/bin
	I0520 08:24:35.201611    4910 out.go:303] Setting JSON to false
	I0520 08:24:35.216573    4910 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1446,"bootTime":1684594829,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 08:24:35.216651    4910 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0520 08:24:35.221911    4910 out.go:177] * [no-preload-641000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0520 08:24:35.228880    4910 out.go:177]   - MINIKUBE_LOCATION=16543
	I0520 08:24:35.228920    4910 notify.go:220] Checking for updates...
	I0520 08:24:35.235790    4910 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	I0520 08:24:35.238887    4910 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 08:24:35.241888    4910 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 08:24:35.244854    4910 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	I0520 08:24:35.247837    4910 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 08:24:35.251127    4910 config.go:182] Loaded profile config "no-preload-641000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0520 08:24:35.251358    4910 driver.go:375] Setting default libvirt URI to qemu:///system
	I0520 08:24:35.255796    4910 out.go:177] * Using the qemu2 driver based on existing profile
	I0520 08:24:35.262851    4910 start.go:295] selected driver: qemu2
	I0520 08:24:35.262861    4910 start.go:870] validating driver "qemu2" against &{Name:no-preload-641000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:no-preload-641000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0520 08:24:35.262932    4910 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 08:24:35.264787    4910 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 08:24:35.264810    4910 cni.go:84] Creating CNI manager for ""
	I0520 08:24:35.264818    4910 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 08:24:35.264825    4910 start_flags.go:319] config:
	{Name:no-preload-641000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:no-preload-641000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0520 08:24:35.264894    4910 iso.go:125] acquiring lock: {Name:mkf05a14b47e9b67445252fbb4917f3bcb37f054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 08:24:35.272803    4910 out.go:177] * Starting control plane node no-preload-641000 in cluster no-preload-641000
	I0520 08:24:35.276812    4910 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0520 08:24:35.276877    4910 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/no-preload-641000/config.json ...
	I0520 08:24:35.276911    4910 cache.go:107] acquiring lock: {Name:mk16b62ad5d32a020aeda5397f7794760ecffb2d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 08:24:35.276933    4910 cache.go:107] acquiring lock: {Name:mk1c20507876e80963a4184c3ad3a7ef44077016 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 08:24:35.276924    4910 cache.go:107] acquiring lock: {Name:mk5caa061f40481417234c0803936d5dd5376e91 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 08:24:35.276990    4910 cache.go:115] /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0520 08:24:35.276996    4910 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 90.958µs
	I0520 08:24:35.277004    4910 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0520 08:24:35.277014    4910 cache.go:115] /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.2 exists
	I0520 08:24:35.277023    4910 cache.go:115] /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.2 exists
	I0520 08:24:35.277028    4910 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.27.2" -> "/Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.2" took 118.291µs
	I0520 08:24:35.277035    4910 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.27.2 -> /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.2 succeeded
	I0520 08:24:35.277025    4910 cache.go:107] acquiring lock: {Name:mk12f664196753926beb77b61e2096e57b9c03aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 08:24:35.277040    4910 cache.go:107] acquiring lock: {Name:mk9934d56a9d50e87ac2fbd4847ca5a59099f651 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 08:24:35.277026    4910 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.27.2" -> "/Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.2" took 110.125µs
	I0520 08:24:35.277053    4910 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.27.2 -> /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.2 succeeded
	I0520 08:24:35.277013    4910 cache.go:107] acquiring lock: {Name:mkef9ad2f4e7096c73a5f449ffd595a3adfcd9bd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 08:24:35.277095    4910 cache.go:115] /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0520 08:24:35.277102    4910 cache.go:115] /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.2 exists
	I0520 08:24:35.277102    4910 cache.go:107] acquiring lock: {Name:mk811e1b5b242888cb17f3435c4f4263f0384e14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 08:24:35.277107    4910 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.27.2" -> "/Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.2" took 94.375µs
	I0520 08:24:35.277141    4910 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.27.2 -> /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.2 succeeded
	I0520 08:24:35.277101    4910 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 61.5µs
	I0520 08:24:35.277157    4910 cache.go:115] /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.2 exists
	I0520 08:24:35.277146    4910 cache.go:107] acquiring lock: {Name:mkde411108e981f49f11f9d8950d16804716b18f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 08:24:35.277162    4910 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.27.2" -> "/Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.2" took 61µs
	I0520 08:24:35.277167    4910 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.27.2 -> /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.2 succeeded
	I0520 08:24:35.277158    4910 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0520 08:24:35.277180    4910 cache.go:195] Successfully downloaded all kic artifacts
	I0520 08:24:35.277193    4910 cache.go:115] /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I0520 08:24:35.277197    4910 start.go:364] acquiring machines lock for no-preload-641000: {Name:mk270c257f49033ffda3b70fea9b877c78314aa8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 08:24:35.277197    4910 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1" took 176.208µs
	I0520 08:24:35.277205    4910 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I0520 08:24:35.277230    4910 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.7-0
	I0520 08:24:35.284517    4910 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.7-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.7-0
	I0520 08:24:36.310441    4910 cache.go:162] opening:  /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0
	I0520 08:24:37.065225    4910 start.go:368] acquired machines lock for "no-preload-641000" in 1.787966458s
	I0520 08:24:37.065346    4910 start.go:96] Skipping create...Using existing machine configuration
	I0520 08:24:37.065362    4910 fix.go:55] fixHost starting: 
	I0520 08:24:37.065984    4910 fix.go:103] recreateIfNeeded on no-preload-641000: state=Stopped err=<nil>
	W0520 08:24:37.066020    4910 fix.go:129] unexpected machine state, will restart: <nil>
	I0520 08:24:37.075758    4910 out.go:177] * Restarting existing qemu2 VM for "no-preload-641000" ...
	I0520 08:24:37.086863    4910 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/no-preload-641000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/no-preload-641000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/no-preload-641000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:b6:2a:d8:2a:01 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/no-preload-641000/disk.qcow2
	I0520 08:24:37.097235    4910 main.go:141] libmachine: STDOUT: 
	I0520 08:24:37.097454    4910 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 08:24:37.097578    4910 fix.go:57] fixHost completed within 32.217125ms
	I0520 08:24:37.097602    4910 start.go:83] releasing machines lock for "no-preload-641000", held for 32.350917ms
	W0520 08:24:37.097632    4910 start.go:687] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 08:24:37.098056    4910 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:24:37.098071    4910 start.go:702] Will try again in 5 seconds ...
	I0520 08:24:42.098336    4910 start.go:364] acquiring machines lock for no-preload-641000: {Name:mk270c257f49033ffda3b70fea9b877c78314aa8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 08:24:42.098799    4910 start.go:368] acquired machines lock for "no-preload-641000" in 380.292µs
	I0520 08:24:42.098939    4910 start.go:96] Skipping create...Using existing machine configuration
	I0520 08:24:42.098961    4910 fix.go:55] fixHost starting: 
	I0520 08:24:42.099819    4910 fix.go:103] recreateIfNeeded on no-preload-641000: state=Stopped err=<nil>
	W0520 08:24:42.099844    4910 fix.go:129] unexpected machine state, will restart: <nil>
	I0520 08:24:42.105653    4910 out.go:177] * Restarting existing qemu2 VM for "no-preload-641000" ...
	I0520 08:24:42.109557    4910 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/no-preload-641000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/no-preload-641000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/no-preload-641000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:b6:2a:d8:2a:01 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/no-preload-641000/disk.qcow2
	I0520 08:24:42.119024    4910 main.go:141] libmachine: STDOUT: 
	I0520 08:24:42.119068    4910 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 08:24:42.119140    4910 fix.go:57] fixHost completed within 20.183ms
	I0520 08:24:42.119158    4910 start.go:83] releasing machines lock for "no-preload-641000", held for 20.337875ms
	W0520 08:24:42.119550    4910 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-641000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-641000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:24:42.130504    4910 out.go:177] 
	W0520 08:24:42.134958    4910 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 08:24:42.135008    4910 out.go:239] * 
	* 
	W0520 08:24:42.136767    4910 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 08:24:42.146590    4910 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-641000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.27.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-641000 -n no-preload-641000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-641000 -n no-preload-641000: exit status 7 (48.113833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-641000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (7.03s)
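
Note: each restart attempt above dies at the same point: qemu-system-aarch64 is launched through socket_vmnet_client, which cannot reach the socket_vmnet daemon behind /var/run/socket_vmnet ("Connection refused"). A minimal sketch for checking the daemon on the host, assuming the Homebrew-style install whose client path appears in the log; the daemon binary path and gateway address below are assumptions drawn from the upstream socket_vmnet README, not from this report:

	# Is the unix socket present and the daemon alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# If not, relaunch it (root is required to create the vmnet interface):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &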

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-641000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-641000 -n no-preload-641000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-641000 -n no-preload-641000: exit status 7 (34.273292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-641000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-641000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-641000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-641000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (30.104209ms)

** stderr ** 
	error: context "no-preload-641000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-641000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-641000 -n no-preload-641000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-641000 -n no-preload-641000: exit status 7 (31.508167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-641000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p no-preload-641000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p no-preload-641000 "sudo crictl images -o json": exit status 89 (40.051708ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p no-preload-641000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p no-preload-641000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p no-preload-641000"
start_stop_delete_test.go:304: v1.27.2 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.7-0",
- 	"registry.k8s.io/kube-apiserver:v1.27.2",
- 	"registry.k8s.io/kube-controller-manager:v1.27.2",
- 	"registry.k8s.io/kube-proxy:v1.27.2",
- 	"registry.k8s.io/kube-scheduler:v1.27.2",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-641000 -n no-preload-641000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-641000 -n no-preload-641000: exit status 7 (28.702666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-641000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
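
Note: the "invalid character '*'" message is an ordinary JSON-decode failure, not an image problem: with the node stopped, the ssh command returns minikube's plain-text usage hint instead of crictl's JSON, and the leading '*' is rejected by the test's decoder. The shape of the failure can be reproduced with any strict JSON parser (json.tool here is only a stand-in for the Go decoder the test actually uses):

	# Feed the non-JSON stdout captured above into a strict parser:
	echo '* The control plane node must be running for this command' | python3 -m json.tool
	# -> Expecting value: line 1 column 1 (char 0)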

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-641000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-641000 --alsologtostderr -v=1: exit status 89 (43.100292ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p no-preload-641000"

-- /stdout --
** stderr ** 
	I0520 08:24:42.399753    4943 out.go:296] Setting OutFile to fd 1 ...
	I0520 08:24:42.399863    4943 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:24:42.399869    4943 out.go:309] Setting ErrFile to fd 2...
	I0520 08:24:42.399871    4943 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:24:42.399952    4943 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16543-1012/.minikube/bin
	I0520 08:24:42.400150    4943 out.go:303] Setting JSON to false
	I0520 08:24:42.400159    4943 mustload.go:65] Loading cluster: no-preload-641000
	I0520 08:24:42.400330    4943 config.go:182] Loaded profile config "no-preload-641000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0520 08:24:42.403607    4943 out.go:177] * The control plane node must be running for this command
	I0520 08:24:42.409565    4943 out.go:177]   To start a cluster, run: "minikube start -p no-preload-641000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-641000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-641000 -n no-preload-641000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-641000 -n no-preload-641000: exit status 7 (27.282125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-641000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-641000 -n no-preload-641000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-641000 -n no-preload-641000: exit status 7 (27.814208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-641000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)
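
Note: all of the no-preload subtests above trace back to the failed restart, and minikube's own output already names the recovery path. A sketch of the manual cleanup, using only the profile name and flags already present in this log (it will keep failing until the socket_vmnet daemon is reachable again):

	# Recovery suggested by the "may fix it" hint in the SecondStart output:
	out/minikube-darwin-arm64 delete -p no-preload-641000
	out/minikube-darwin-arm64 start -p no-preload-641000 --memory=2200 --preload=false --driver=qemu2 --kubernetes-version=v1.27.2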

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (11.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-646000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-646000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.2: exit status 80 (11.03852575s)

-- stdout --
	* [default-k8s-diff-port-646000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16543
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node default-k8s-diff-port-646000 in cluster default-k8s-diff-port-646000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-646000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 08:24:43.179258    4981 out.go:296] Setting OutFile to fd 1 ...
	I0520 08:24:43.179387    4981 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:24:43.179390    4981 out.go:309] Setting ErrFile to fd 2...
	I0520 08:24:43.179393    4981 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:24:43.179464    4981 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16543-1012/.minikube/bin
	I0520 08:24:43.180545    4981 out.go:303] Setting JSON to false
	I0520 08:24:43.195499    4981 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1454,"bootTime":1684594829,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 08:24:43.195565    4981 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0520 08:24:43.199900    4981 out.go:177] * [default-k8s-diff-port-646000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0520 08:24:43.206774    4981 notify.go:220] Checking for updates...
	I0520 08:24:43.210791    4981 out.go:177]   - MINIKUBE_LOCATION=16543
	I0520 08:24:43.214821    4981 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	I0520 08:24:43.217813    4981 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 08:24:43.220815    4981 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 08:24:43.223876    4981 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	I0520 08:24:43.226811    4981 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 08:24:43.230149    4981 config.go:182] Loaded profile config "embed-certs-782000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0520 08:24:43.230207    4981 config.go:182] Loaded profile config "multinode-046000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0520 08:24:43.230227    4981 driver.go:375] Setting default libvirt URI to qemu:///system
	I0520 08:24:43.234813    4981 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 08:24:43.241763    4981 start.go:295] selected driver: qemu2
	I0520 08:24:43.241774    4981 start.go:870] validating driver "qemu2" against <nil>
	I0520 08:24:43.241781    4981 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 08:24:43.243709    4981 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0520 08:24:43.246819    4981 out.go:177] * Automatically selected the socket_vmnet network
	I0520 08:24:43.249832    4981 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 08:24:43.249852    4981 cni.go:84] Creating CNI manager for ""
	I0520 08:24:43.249862    4981 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 08:24:43.249866    4981 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 08:24:43.249873    4981 start_flags.go:319] config:
	{Name:default-k8s-diff-port-646000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:default-k8s-diff-port-646000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0520 08:24:43.249960    4981 iso.go:125] acquiring lock: {Name:mkf05a14b47e9b67445252fbb4917f3bcb37f054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 08:24:43.258610    4981 out.go:177] * Starting control plane node default-k8s-diff-port-646000 in cluster default-k8s-diff-port-646000
	I0520 08:24:43.262813    4981 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0520 08:24:43.262835    4981 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0520 08:24:43.262853    4981 cache.go:57] Caching tarball of preloaded images
	I0520 08:24:43.262924    4981 preload.go:174] Found /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 08:24:43.262930    4981 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0520 08:24:43.262983    4981 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/default-k8s-diff-port-646000/config.json ...
	I0520 08:24:43.262995    4981 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/default-k8s-diff-port-646000/config.json: {Name:mk2514f4633fba5d9716432d564a7add808bd321 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 08:24:43.263195    4981 cache.go:195] Successfully downloaded all kic artifacts
	I0520 08:24:43.263207    4981 start.go:364] acquiring machines lock for default-k8s-diff-port-646000: {Name:mk270c257f49033ffda3b70fea9b877c78314aa8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 08:24:44.427878    4981 start.go:368] acquired machines lock for "default-k8s-diff-port-646000" in 1.164606125s
	I0520 08:24:44.428083    4981 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-646000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:default-k8s-diff-port-646000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8444 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 08:24:44.428361    4981 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 08:24:44.436814    4981 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 08:24:44.482203    4981 start.go:159] libmachine.API.Create for "default-k8s-diff-port-646000" (driver="qemu2")
	I0520 08:24:44.482256    4981 client.go:168] LocalClient.Create starting
	I0520 08:24:44.482365    4981 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca.pem
	I0520 08:24:44.482403    4981 main.go:141] libmachine: Decoding PEM data...
	I0520 08:24:44.482454    4981 main.go:141] libmachine: Parsing certificate...
	I0520 08:24:44.482504    4981 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/cert.pem
	I0520 08:24:44.482531    4981 main.go:141] libmachine: Decoding PEM data...
	I0520 08:24:44.482543    4981 main.go:141] libmachine: Parsing certificate...
	I0520 08:24:44.483108    4981 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16543-1012/.minikube/cache/iso/arm64/minikube-v1.30.1-1684174510-16506-arm64.iso...
	I0520 08:24:44.618732    4981 main.go:141] libmachine: Creating SSH key...
	I0520 08:24:44.794234    4981 main.go:141] libmachine: Creating Disk image...
	I0520 08:24:44.794246    4981 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 08:24:44.794409    4981 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/default-k8s-diff-port-646000/disk.qcow2.raw /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/default-k8s-diff-port-646000/disk.qcow2
	I0520 08:24:44.803433    4981 main.go:141] libmachine: STDOUT: 
	I0520 08:24:44.803453    4981 main.go:141] libmachine: STDERR: 
	I0520 08:24:44.803522    4981 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/default-k8s-diff-port-646000/disk.qcow2 +20000M
	I0520 08:24:44.811685    4981 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 08:24:44.811702    4981 main.go:141] libmachine: STDERR: 
	I0520 08:24:44.811720    4981 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/default-k8s-diff-port-646000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/default-k8s-diff-port-646000/disk.qcow2
	I0520 08:24:44.811732    4981 main.go:141] libmachine: Starting QEMU VM...
	I0520 08:24:44.811770    4981 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/default-k8s-diff-port-646000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/default-k8s-diff-port-646000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/default-k8s-diff-port-646000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:56:57:51:c9:b9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/default-k8s-diff-port-646000/disk.qcow2
	I0520 08:24:44.813356    4981 main.go:141] libmachine: STDOUT: 
	I0520 08:24:44.813373    4981 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 08:24:44.813396    4981 client.go:171] LocalClient.Create took 331.134959ms
	I0520 08:24:46.815585    4981 start.go:128] duration metric: createHost completed in 2.387190458s
	I0520 08:24:46.815656    4981 start.go:83] releasing machines lock for "default-k8s-diff-port-646000", held for 2.387742792s
	W0520 08:24:46.815772    4981 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:24:46.834368    4981 out.go:177] * Deleting "default-k8s-diff-port-646000" in qemu2 ...
	W0520 08:24:46.858085    4981 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:24:46.858117    4981 start.go:702] Will try again in 5 seconds ...
	I0520 08:24:51.860304    4981 start.go:364] acquiring machines lock for default-k8s-diff-port-646000: {Name:mk270c257f49033ffda3b70fea9b877c78314aa8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 08:24:51.871551    4981 start.go:368] acquired machines lock for "default-k8s-diff-port-646000" in 11.171291ms
	I0520 08:24:51.871615    4981 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-646000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:default-k8s-diff-port-646000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8444 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 08:24:51.871814    4981 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 08:24:51.876128    4981 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 08:24:51.917507    4981 start.go:159] libmachine.API.Create for "default-k8s-diff-port-646000" (driver="qemu2")
	I0520 08:24:51.917544    4981 client.go:168] LocalClient.Create starting
	I0520 08:24:51.917660    4981 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca.pem
	I0520 08:24:51.917698    4981 main.go:141] libmachine: Decoding PEM data...
	I0520 08:24:51.917720    4981 main.go:141] libmachine: Parsing certificate...
	I0520 08:24:51.917814    4981 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/cert.pem
	I0520 08:24:51.917846    4981 main.go:141] libmachine: Decoding PEM data...
	I0520 08:24:51.917858    4981 main.go:141] libmachine: Parsing certificate...
	I0520 08:24:51.918334    4981 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16543-1012/.minikube/cache/iso/arm64/minikube-v1.30.1-1684174510-16506-arm64.iso...
	I0520 08:24:52.044652    4981 main.go:141] libmachine: Creating SSH key...
	I0520 08:24:52.130893    4981 main.go:141] libmachine: Creating Disk image...
	I0520 08:24:52.130902    4981 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 08:24:52.131106    4981 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/default-k8s-diff-port-646000/disk.qcow2.raw /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/default-k8s-diff-port-646000/disk.qcow2
	I0520 08:24:52.140239    4981 main.go:141] libmachine: STDOUT: 
	I0520 08:24:52.140261    4981 main.go:141] libmachine: STDERR: 
	I0520 08:24:52.140332    4981 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/default-k8s-diff-port-646000/disk.qcow2 +20000M
	I0520 08:24:52.150984    4981 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 08:24:52.151000    4981 main.go:141] libmachine: STDERR: 
	I0520 08:24:52.151023    4981 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/default-k8s-diff-port-646000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/default-k8s-diff-port-646000/disk.qcow2
	I0520 08:24:52.151032    4981 main.go:141] libmachine: Starting QEMU VM...
	I0520 08:24:52.151063    4981 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/default-k8s-diff-port-646000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/default-k8s-diff-port-646000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/default-k8s-diff-port-646000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:ee:30:b1:e8:0f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/default-k8s-diff-port-646000/disk.qcow2
	I0520 08:24:52.152662    4981 main.go:141] libmachine: STDOUT: 
	I0520 08:24:52.152677    4981 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 08:24:52.152689    4981 client.go:171] LocalClient.Create took 235.140667ms
	I0520 08:24:54.154825    4981 start.go:128] duration metric: createHost completed in 2.282986125s
	I0520 08:24:54.154893    4981 start.go:83] releasing machines lock for "default-k8s-diff-port-646000", held for 2.283323458s
	W0520 08:24:54.155381    4981 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-646000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-646000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:24:54.165916    4981 out.go:177] 
	W0520 08:24:54.170258    4981 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 08:24:54.170281    4981 out.go:239] * 
	* 
	W0520 08:24:54.171819    4981 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 08:24:54.181979    4981 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-646000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-646000 -n default-k8s-diff-port-646000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-646000 -n default-k8s-diff-port-646000: exit status 7 (49.257666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-646000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (11.09s)
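Every start failure in this run reduces to the same root cause: socket_vmnet_client cannot reach the socket_vmnet daemon's unix socket, so QEMU never receives the network file descriptor (the fd=3 passed via -netdev socket in the command lines above). For reference, a minimal self-contained Go sketch, not part of minikube, that probes the same socket path the logs show under SocketVMnetPath; on this host it would print the same "connection refused":

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Path taken from SocketVMnetPath in the profile config logged above.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// "connection refused" means the socket file exists but no daemon
			// is accepting on it; a missing file would surface as
			// "no such file or directory" instead.
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is listening on", sock)
	}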

TestStartStop/group/embed-certs/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-782000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-782000 create -f testdata/busybox.yaml: exit status 1 (31.702791ms)

** stderr ** 
	error: context "embed-certs-782000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-782000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-782000 -n embed-certs-782000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-782000 -n embed-certs-782000: exit status 7 (32.026459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-782000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-782000 -n embed-certs-782000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-782000 -n embed-certs-782000: exit status 7 (34.272166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-782000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)
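The DeployApp failure is a knock-on effect: because FirstStart never brought the VM up, no "embed-certs-782000" context was ever written to the kubeconfig, so every subsequent kubectl --context call fails identically. A hedged sketch (assuming k8s.io/client-go is on the module path) of enumerating the contexts that actually exist, using the same loading rules kubectl applies:

	package main

	import (
		"fmt"
		"log"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Resolve the kubeconfig exactly as kubectl does: KUBECONFIG first,
		// then the default ~/.kube/config.
		rules := clientcmd.NewDefaultClientConfigLoadingRules()
		cfg, err := rules.Load()
		if err != nil {
			log.Fatal(err)
		}
		for name := range cfg.Contexts {
			// On this host the list would not include "embed-certs-782000".
			fmt.Println(name)
		}
	}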

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-782000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-782000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-782000 describe deploy/metrics-server -n kube-system: exit status 1 (27.758375ms)

** stderr ** 
	error: context "embed-certs-782000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-782000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-782000 -n embed-certs-782000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-782000 -n embed-certs-782000: exit status 7 (29.076125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-782000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)
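The assertion at start_stop_delete_test.go:221 builds its expected string by prefixing the --registries override onto the --images override. A hypothetical helper, not minikube's actual code, showing the composition the test checks for:

	package main

	import "fmt"

	// overrideImage illustrates how the expected addon image reference is
	// composed from the --registries and --images flags seen above.
	func overrideImage(registry, image string) string {
		if registry == "" {
			return image
		}
		return registry + "/" + image
	}

	func main() {
		// Prints "fake.domain/registry.k8s.io/echoserver:1.4", the string the
		// test expected to find in the metrics-server deployment.
		fmt.Println(overrideImage("fake.domain", "registry.k8s.io/echoserver:1.4"))
	}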

TestStartStop/group/embed-certs/serial/SecondStart (7.07s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-782000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-782000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.2: exit status 80 (7.027454584s)

-- stdout --
	* [embed-certs-782000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16543
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node embed-certs-782000 in cluster embed-certs-782000
	* Restarting existing qemu2 VM for "embed-certs-782000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-782000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 08:24:44.908996    5008 out.go:296] Setting OutFile to fd 1 ...
	I0520 08:24:44.909123    5008 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:24:44.909126    5008 out.go:309] Setting ErrFile to fd 2...
	I0520 08:24:44.909128    5008 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:24:44.909206    5008 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16543-1012/.minikube/bin
	I0520 08:24:44.910157    5008 out.go:303] Setting JSON to false
	I0520 08:24:44.925408    5008 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1455,"bootTime":1684594829,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 08:24:44.925474    5008 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0520 08:24:44.933496    5008 out.go:177] * [embed-certs-782000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0520 08:24:44.937594    5008 out.go:177]   - MINIKUBE_LOCATION=16543
	I0520 08:24:44.937658    5008 notify.go:220] Checking for updates...
	I0520 08:24:44.944498    5008 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	I0520 08:24:44.947505    5008 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 08:24:44.950587    5008 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 08:24:44.953514    5008 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	I0520 08:24:44.956531    5008 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 08:24:44.959789    5008 config.go:182] Loaded profile config "embed-certs-782000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0520 08:24:44.960013    5008 driver.go:375] Setting default libvirt URI to qemu:///system
	I0520 08:24:44.964426    5008 out.go:177] * Using the qemu2 driver based on existing profile
	I0520 08:24:44.971535    5008 start.go:295] selected driver: qemu2
	I0520 08:24:44.971542    5008 start.go:870] validating driver "qemu2" against &{Name:embed-certs-782000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:embed-certs-782000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0520 08:24:44.971617    5008 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 08:24:44.973443    5008 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 08:24:44.973464    5008 cni.go:84] Creating CNI manager for ""
	I0520 08:24:44.973471    5008 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 08:24:44.973476    5008 start_flags.go:319] config:
	{Name:embed-certs-782000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:embed-certs-782000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0520 08:24:44.973537    5008 iso.go:125] acquiring lock: {Name:mkf05a14b47e9b67445252fbb4917f3bcb37f054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 08:24:44.979544    5008 out.go:177] * Starting control plane node embed-certs-782000 in cluster embed-certs-782000
	I0520 08:24:44.983501    5008 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0520 08:24:44.983518    5008 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0520 08:24:44.983535    5008 cache.go:57] Caching tarball of preloaded images
	I0520 08:24:44.983653    5008 preload.go:174] Found /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 08:24:44.983662    5008 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0520 08:24:44.983719    5008 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/embed-certs-782000/config.json ...
	I0520 08:24:44.984112    5008 cache.go:195] Successfully downloaded all kic artifacts
	I0520 08:24:44.984125    5008 start.go:364] acquiring machines lock for embed-certs-782000: {Name:mk270c257f49033ffda3b70fea9b877c78314aa8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 08:24:46.815824    5008 start.go:368] acquired machines lock for "embed-certs-782000" in 1.831675833s
	I0520 08:24:46.816000    5008 start.go:96] Skipping create...Using existing machine configuration
	I0520 08:24:46.816032    5008 fix.go:55] fixHost starting: 
	I0520 08:24:46.816700    5008 fix.go:103] recreateIfNeeded on embed-certs-782000: state=Stopped err=<nil>
	W0520 08:24:46.816738    5008 fix.go:129] unexpected machine state, will restart: <nil>
	I0520 08:24:46.826394    5008 out.go:177] * Restarting existing qemu2 VM for "embed-certs-782000" ...
	I0520 08:24:46.838718    5008 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/embed-certs-782000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/embed-certs-782000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/embed-certs-782000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:b7:2a:b8:52:63 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/embed-certs-782000/disk.qcow2
	I0520 08:24:46.848981    5008 main.go:141] libmachine: STDOUT: 
	I0520 08:24:46.849037    5008 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 08:24:46.849150    5008 fix.go:57] fixHost completed within 33.119959ms
	I0520 08:24:46.849171    5008 start.go:83] releasing machines lock for "embed-certs-782000", held for 33.318209ms
	W0520 08:24:46.849202    5008 start.go:687] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 08:24:46.849501    5008 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:24:46.849516    5008 start.go:702] Will try again in 5 seconds ...
	I0520 08:24:51.851738    5008 start.go:364] acquiring machines lock for embed-certs-782000: {Name:mk270c257f49033ffda3b70fea9b877c78314aa8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 08:24:51.852263    5008 start.go:368] acquired machines lock for "embed-certs-782000" in 413µs
	I0520 08:24:51.852426    5008 start.go:96] Skipping create...Using existing machine configuration
	I0520 08:24:51.852452    5008 fix.go:55] fixHost starting: 
	I0520 08:24:51.853277    5008 fix.go:103] recreateIfNeeded on embed-certs-782000: state=Stopped err=<nil>
	W0520 08:24:51.853303    5008 fix.go:129] unexpected machine state, will restart: <nil>
	I0520 08:24:51.858259    5008 out.go:177] * Restarting existing qemu2 VM for "embed-certs-782000" ...
	I0520 08:24:51.862342    5008 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/embed-certs-782000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/embed-certs-782000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/embed-certs-782000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:b7:2a:b8:52:63 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/embed-certs-782000/disk.qcow2
	I0520 08:24:51.871339    5008 main.go:141] libmachine: STDOUT: 
	I0520 08:24:51.871382    5008 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 08:24:51.871462    5008 fix.go:57] fixHost completed within 19.015542ms
	I0520 08:24:51.871481    5008 start.go:83] releasing machines lock for "embed-certs-782000", held for 19.176416ms
	W0520 08:24:51.871817    5008 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-782000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-782000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:24:51.883045    5008 out.go:177] 
	W0520 08:24:51.886297    5008 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 08:24:51.886314    5008 out.go:239] * 
	* 
	W0520 08:24:51.890517    5008 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 08:24:51.900204    5008 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-782000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-782000 -n embed-certs-782000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-782000 -n embed-certs-782000: exit status 7 (44.593833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-782000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (7.07s)
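SecondStart exercises the restart path rather than provisioning: fixHost finds the machine Stopped, relaunches QEMU, and when the socket dial fails it logs a warning, waits a fixed 5 seconds, and retries exactly once before exiting with status 80. A hypothetical sketch of that two-attempt shape (not minikube's actual start code):

	package main

	import (
		"errors"
		"log"
		"time"
	)

	// startWithRetry mirrors the flow visible in the log: one immediate
	// attempt, a logged warning, a fixed 5-second pause, then one retry.
	func startWithRetry(start func() error) error {
		if err := start(); err != nil {
			log.Printf("StartHost failed, but will try again: %v", err)
			time.Sleep(5 * time.Second)
			return start()
		}
		return nil
	}

	func main() {
		err := startWithRetry(func() error {
			return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
		})
		if err != nil {
			// Both attempts hit the same refused socket, as in the run above.
			log.Printf("giving up: %v", err)
		}
	}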

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-782000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-782000 -n embed-certs-782000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-782000 -n embed-certs-782000: exit status 7 (33.595167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-782000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-782000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-782000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-782000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.870833ms)

** stderr ** 
	error: context "embed-certs-782000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-782000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-782000 -n embed-certs-782000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-782000 -n embed-certs-782000: exit status 7 (31.692834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-782000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p embed-certs-782000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p embed-certs-782000 "sudo crictl images -o json": exit status 89 (38.236916ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p embed-certs-782000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p embed-certs-782000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p embed-certs-782000"
start_stop_delete_test.go:304: v1.27.2 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.7-0",
- 	"registry.k8s.io/kube-apiserver:v1.27.2",
- 	"registry.k8s.io/kube-controller-manager:v1.27.2",
- 	"registry.k8s.io/kube-proxy:v1.27.2",
- 	"registry.k8s.io/kube-scheduler:v1.27.2",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-782000 -n embed-certs-782000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-782000 -n embed-certs-782000: exit status 7 (28.890375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-782000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
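VerifyKubernetesImages pipes the output of `sudo crictl images -o json` into a JSON decoder, so when the command instead prints the "control plane node must be running" banner, decoding fails on the leading '*'. A sketch of that decode step; the payload shape is an assumption based on CRI's ListImagesResponse, not taken from this run:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// crictlImages models only the field the test compares against.
	type crictlImages struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		banner := []byte("* The control plane node must be running for this command")
		var out crictlImages
		if err := json.Unmarshal(banner, &out); err != nil {
			// Reproduces the failure above:
			// invalid character '*' looking for beginning of value
			fmt.Println("failed to decode images json:", err)
		}
	}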

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-782000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-782000 --alsologtostderr -v=1: exit status 89 (42.784917ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p embed-certs-782000"

-- /stdout --
** stderr ** 
	I0520 08:24:52.145616    5027 out.go:296] Setting OutFile to fd 1 ...
	I0520 08:24:52.145766    5027 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:24:52.145769    5027 out.go:309] Setting ErrFile to fd 2...
	I0520 08:24:52.145771    5027 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:24:52.145843    5027 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16543-1012/.minikube/bin
	I0520 08:24:52.146048    5027 out.go:303] Setting JSON to false
	I0520 08:24:52.146056    5027 mustload.go:65] Loading cluster: embed-certs-782000
	I0520 08:24:52.146222    5027 config.go:182] Loaded profile config "embed-certs-782000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0520 08:24:52.150140    5027 out.go:177] * The control plane node must be running for this command
	I0520 08:24:52.156127    5027 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-782000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-782000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-782000 -n embed-certs-782000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-782000 -n embed-certs-782000: exit status 7 (27.530584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-782000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-782000 -n embed-certs-782000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-782000 -n embed-certs-782000: exit status 7 (28.007208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-782000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)

TestStartStop/group/newest-cni/serial/FirstStart (11.27s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-700000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.27.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-700000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.27.2: exit status 80 (11.197160292s)

-- stdout --
	* [newest-cni-700000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16543
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node newest-cni-700000 in cluster newest-cni-700000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-700000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 08:24:52.646855    5053 out.go:296] Setting OutFile to fd 1 ...
	I0520 08:24:52.646999    5053 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:24:52.647001    5053 out.go:309] Setting ErrFile to fd 2...
	I0520 08:24:52.647004    5053 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:24:52.647072    5053 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16543-1012/.minikube/bin
	I0520 08:24:52.648070    5053 out.go:303] Setting JSON to false
	I0520 08:24:52.663245    5053 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1463,"bootTime":1684594829,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 08:24:52.663318    5053 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0520 08:24:52.666875    5053 out.go:177] * [newest-cni-700000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0520 08:24:52.672887    5053 out.go:177]   - MINIKUBE_LOCATION=16543
	I0520 08:24:52.672938    5053 notify.go:220] Checking for updates...
	I0520 08:24:52.679893    5053 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	I0520 08:24:52.681362    5053 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 08:24:52.684840    5053 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 08:24:52.687911    5053 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	I0520 08:24:52.690901    5053 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 08:24:52.694201    5053 config.go:182] Loaded profile config "default-k8s-diff-port-646000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0520 08:24:52.694258    5053 config.go:182] Loaded profile config "multinode-046000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0520 08:24:52.694277    5053 driver.go:375] Setting default libvirt URI to qemu:///system
	I0520 08:24:52.698857    5053 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 08:24:52.705814    5053 start.go:295] selected driver: qemu2
	I0520 08:24:52.705823    5053 start.go:870] validating driver "qemu2" against <nil>
	I0520 08:24:52.705830    5053 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 08:24:52.707594    5053 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	W0520 08:24:52.707617    5053 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0520 08:24:52.715884    5053 out.go:177] * Automatically selected the socket_vmnet network
	I0520 08:24:52.718985    5053 start_flags.go:934] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0520 08:24:52.719004    5053 cni.go:84] Creating CNI manager for ""
	I0520 08:24:52.719019    5053 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 08:24:52.719022    5053 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 08:24:52.719028    5053 start_flags.go:319] config:
	{Name:newest-cni-700000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:newest-cni-700000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0520 08:24:52.719112    5053 iso.go:125] acquiring lock: {Name:mkf05a14b47e9b67445252fbb4917f3bcb37f054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 08:24:52.727831    5053 out.go:177] * Starting control plane node newest-cni-700000 in cluster newest-cni-700000
	I0520 08:24:52.731845    5053 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0520 08:24:52.731869    5053 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0520 08:24:52.731885    5053 cache.go:57] Caching tarball of preloaded images
	I0520 08:24:52.731943    5053 preload.go:174] Found /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 08:24:52.731949    5053 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0520 08:24:52.732017    5053 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/newest-cni-700000/config.json ...
	I0520 08:24:52.732029    5053 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/newest-cni-700000/config.json: {Name:mk96ad38491aacbeb3a2b502251b81af4bccdced Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 08:24:52.732230    5053 cache.go:195] Successfully downloaded all kic artifacts
	I0520 08:24:52.732242    5053 start.go:364] acquiring machines lock for newest-cni-700000: {Name:mk270c257f49033ffda3b70fea9b877c78314aa8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 08:24:54.155053    5053 start.go:368] acquired machines lock for "newest-cni-700000" in 1.422775958s
	I0520 08:24:54.155197    5053 start.go:93] Provisioning new machine with config: &{Name:newest-cni-700000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:newest-cni-700000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 08:24:54.155367    5053 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 08:24:54.162991    5053 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 08:24:54.204055    5053 start.go:159] libmachine.API.Create for "newest-cni-700000" (driver="qemu2")
	I0520 08:24:54.204098    5053 client.go:168] LocalClient.Create starting
	I0520 08:24:54.204212    5053 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca.pem
	I0520 08:24:54.204253    5053 main.go:141] libmachine: Decoding PEM data...
	I0520 08:24:54.204281    5053 main.go:141] libmachine: Parsing certificate...
	I0520 08:24:54.204337    5053 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/cert.pem
	I0520 08:24:54.204368    5053 main.go:141] libmachine: Decoding PEM data...
	I0520 08:24:54.204386    5053 main.go:141] libmachine: Parsing certificate...
	I0520 08:24:54.204973    5053 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16543-1012/.minikube/cache/iso/arm64/minikube-v1.30.1-1684174510-16506-arm64.iso...
	I0520 08:24:54.329390    5053 main.go:141] libmachine: Creating SSH key...
	I0520 08:24:54.393250    5053 main.go:141] libmachine: Creating Disk image...
	I0520 08:24:54.393259    5053 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 08:24:54.393410    5053 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/newest-cni-700000/disk.qcow2.raw /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/newest-cni-700000/disk.qcow2
	I0520 08:24:54.402274    5053 main.go:141] libmachine: STDOUT: 
	I0520 08:24:54.402296    5053 main.go:141] libmachine: STDERR: 
	I0520 08:24:54.402361    5053 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/newest-cni-700000/disk.qcow2 +20000M
	I0520 08:24:54.410522    5053 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 08:24:54.410546    5053 main.go:141] libmachine: STDERR: 
	I0520 08:24:54.410569    5053 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/newest-cni-700000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/newest-cni-700000/disk.qcow2
	I0520 08:24:54.410577    5053 main.go:141] libmachine: Starting QEMU VM...
	I0520 08:24:54.410618    5053 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/newest-cni-700000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/newest-cni-700000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/newest-cni-700000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:f7:00:8b:f0:3d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/newest-cni-700000/disk.qcow2
	I0520 08:24:54.412335    5053 main.go:141] libmachine: STDOUT: 
	I0520 08:24:54.412350    5053 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 08:24:54.412370    5053 client.go:171] LocalClient.Create took 208.265542ms
	I0520 08:24:56.414600    5053 start.go:128] duration metric: createHost completed in 2.259177625s
	I0520 08:24:56.414673    5053 start.go:83] releasing machines lock for "newest-cni-700000", held for 2.259585s
	W0520 08:24:56.414726    5053 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:24:56.427600    5053 out.go:177] * Deleting "newest-cni-700000" in qemu2 ...
	W0520 08:24:56.450703    5053 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:24:56.450739    5053 start.go:702] Will try again in 5 seconds ...
	I0520 08:25:01.452883    5053 start.go:364] acquiring machines lock for newest-cni-700000: {Name:mk270c257f49033ffda3b70fea9b877c78314aa8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 08:25:01.468410    5053 start.go:368] acquired machines lock for "newest-cni-700000" in 15.428375ms
	I0520 08:25:01.468457    5053 start.go:93] Provisioning new machine with config: &{Name:newest-cni-700000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:newest-cni-700000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 08:25:01.468706    5053 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 08:25:01.479890    5053 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 08:25:01.524178    5053 start.go:159] libmachine.API.Create for "newest-cni-700000" (driver="qemu2")
	I0520 08:25:01.524218    5053 client.go:168] LocalClient.Create starting
	I0520 08:25:01.524340    5053 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/ca.pem
	I0520 08:25:01.524379    5053 main.go:141] libmachine: Decoding PEM data...
	I0520 08:25:01.524395    5053 main.go:141] libmachine: Parsing certificate...
	I0520 08:25:01.524477    5053 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16543-1012/.minikube/certs/cert.pem
	I0520 08:25:01.524504    5053 main.go:141] libmachine: Decoding PEM data...
	I0520 08:25:01.524516    5053 main.go:141] libmachine: Parsing certificate...
	I0520 08:25:01.525027    5053 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16543-1012/.minikube/cache/iso/arm64/minikube-v1.30.1-1684174510-16506-arm64.iso...
	I0520 08:25:01.650915    5053 main.go:141] libmachine: Creating SSH key...
	I0520 08:25:01.747730    5053 main.go:141] libmachine: Creating Disk image...
	I0520 08:25:01.747739    5053 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 08:25:01.747912    5053 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/newest-cni-700000/disk.qcow2.raw /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/newest-cni-700000/disk.qcow2
	I0520 08:25:01.766746    5053 main.go:141] libmachine: STDOUT: 
	I0520 08:25:01.766763    5053 main.go:141] libmachine: STDERR: 
	I0520 08:25:01.766826    5053 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/newest-cni-700000/disk.qcow2 +20000M
	I0520 08:25:01.774776    5053 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 08:25:01.774792    5053 main.go:141] libmachine: STDERR: 
	I0520 08:25:01.774820    5053 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/newest-cni-700000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/newest-cni-700000/disk.qcow2
	I0520 08:25:01.774834    5053 main.go:141] libmachine: Starting QEMU VM...
	I0520 08:25:01.774871    5053 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/newest-cni-700000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/newest-cni-700000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/newest-cni-700000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:6d:2f:4c:cf:d1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/newest-cni-700000/disk.qcow2
	I0520 08:25:01.776567    5053 main.go:141] libmachine: STDOUT: 
	I0520 08:25:01.776581    5053 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 08:25:01.776593    5053 client.go:171] LocalClient.Create took 252.371416ms
	I0520 08:25:03.778855    5053 start.go:128] duration metric: createHost completed in 2.310114708s
	I0520 08:25:03.778923    5053 start.go:83] releasing machines lock for "newest-cni-700000", held for 2.310492625s
	W0520 08:25:03.779681    5053 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-700000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-700000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:25:03.788777    5053 out.go:177] 
	W0520 08:25:03.792982    5053 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 08:25:03.793015    5053 out.go:239] * 
	* 
	W0520 08:25:03.795673    5053 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 08:25:03.804808    5053 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-700000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.27.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-700000 -n newest-cni-700000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-700000 -n newest-cni-700000: exit status 7 (67.950375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-700000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (11.27s)
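
Every start failure in this group reduces to the same root cause: nothing is accepting connections on /var/run/socket_vmnet, so the qemu2 driver cannot attach its netdev. A minimal Go probe, assuming only the socket path shown in the logs above (illustrative, not part of the suite), reproduces the refused connection:

	// socketprobe.go - sketch: dial the unix socket the qemu2 driver uses.
	package main

	import (
		"fmt"
		"net"
		"os"
	)

	func main() {
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			// Same failure mode as the logs above:
			// Failed to connect to "/var/run/socket_vmnet": Connection refused
			fmt.Fprintln(os.Stderr, "socket_vmnet not reachable:", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the probe fails, the socket_vmnet daemon needs to be restarted on the agent before any qemu2-driver test can pass.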

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-646000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-646000 create -f testdata/busybox.yaml: exit status 1 (30.592875ms)

** stderr ** 
	error: context "default-k8s-diff-port-646000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-646000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-646000 -n default-k8s-diff-port-646000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-646000 -n default-k8s-diff-port-646000: exit status 7 (33.119875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-646000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-646000 -n default-k8s-diff-port-646000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-646000 -n default-k8s-diff-port-646000: exit status 7 (31.408792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-646000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.10s)
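
This failure is secondary: kubectl never had a context to target because FirstStart exited before the cluster was provisioned. A sketch of the precondition these serial subtests assume, shelling out to kubectl the same way the tests do (the profile name is copied from the logs; the helper itself is hypothetical):

	// hascontext.go - sketch: report whether a kubeconfig context exists.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func hasContext(name string) (bool, error) {
		// "kubectl config get-contexts -o name" prints one context name per line.
		out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
		if err != nil {
			return false, err
		}
		for _, ctx := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if ctx == name {
				return true, nil
			}
		}
		return false, nil
	}

	func main() {
		ok, err := hasContext("default-k8s-diff-port-646000")
		fmt.Println("context exists:", ok, "err:", err)
	}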

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-646000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-646000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-646000 describe deploy/metrics-server -n kube-system: exit status 1 (27.337417ms)

** stderr ** 
	error: context "default-k8s-diff-port-646000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-646000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-646000 -n default-k8s-diff-port-646000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-646000 -n default-k8s-diff-port-646000: exit status 7 (28.275209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-646000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.96s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-646000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-646000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.2: exit status 80 (6.905280875s)

-- stdout --
	* [default-k8s-diff-port-646000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16543
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node default-k8s-diff-port-646000 in cluster default-k8s-diff-port-646000
	* Restarting existing qemu2 VM for "default-k8s-diff-port-646000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-646000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 08:24:54.627974    5083 out.go:296] Setting OutFile to fd 1 ...
	I0520 08:24:54.628087    5083 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:24:54.628089    5083 out.go:309] Setting ErrFile to fd 2...
	I0520 08:24:54.628092    5083 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:24:54.628172    5083 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16543-1012/.minikube/bin
	I0520 08:24:54.629141    5083 out.go:303] Setting JSON to false
	I0520 08:24:54.643993    5083 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1465,"bootTime":1684594829,"procs":452,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 08:24:54.644050    5083 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0520 08:24:54.648062    5083 out.go:177] * [default-k8s-diff-port-646000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0520 08:24:54.650922    5083 out.go:177]   - MINIKUBE_LOCATION=16543
	I0520 08:24:54.650974    5083 notify.go:220] Checking for updates...
	I0520 08:24:54.657960    5083 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	I0520 08:24:54.660926    5083 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 08:24:54.664930    5083 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 08:24:54.668932    5083 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	I0520 08:24:54.671904    5083 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 08:24:54.675618    5083 config.go:182] Loaded profile config "default-k8s-diff-port-646000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0520 08:24:54.676106    5083 driver.go:375] Setting default libvirt URI to qemu:///system
	I0520 08:24:54.682920    5083 out.go:177] * Using the qemu2 driver based on existing profile
	I0520 08:24:54.685931    5083 start.go:295] selected driver: qemu2
	I0520 08:24:54.685938    5083 start.go:870] validating driver "qemu2" against &{Name:default-k8s-diff-port-646000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:default-k8s-diff-port-646000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0520 08:24:54.685998    5083 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 08:24:54.687894    5083 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 08:24:54.687918    5083 cni.go:84] Creating CNI manager for ""
	I0520 08:24:54.687925    5083 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 08:24:54.687929    5083 start_flags.go:319] config:
	{Name:default-k8s-diff-port-646000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:default-k8s-diff-port-646000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0520 08:24:54.687994    5083 iso.go:125] acquiring lock: {Name:mkf05a14b47e9b67445252fbb4917f3bcb37f054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 08:24:54.696890    5083 out.go:177] * Starting control plane node default-k8s-diff-port-646000 in cluster default-k8s-diff-port-646000
	I0520 08:24:54.700923    5083 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0520 08:24:54.700941    5083 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0520 08:24:54.700956    5083 cache.go:57] Caching tarball of preloaded images
	I0520 08:24:54.701021    5083 preload.go:174] Found /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 08:24:54.701026    5083 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0520 08:24:54.701091    5083 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/default-k8s-diff-port-646000/config.json ...
	I0520 08:24:54.701393    5083 cache.go:195] Successfully downloaded all kic artifacts
	I0520 08:24:54.701404    5083 start.go:364] acquiring machines lock for default-k8s-diff-port-646000: {Name:mk270c257f49033ffda3b70fea9b877c78314aa8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 08:24:56.414833    5083 start.go:368] acquired machines lock for "default-k8s-diff-port-646000" in 1.713368125s
	I0520 08:24:56.414968    5083 start.go:96] Skipping create...Using existing machine configuration
	I0520 08:24:56.414998    5083 fix.go:55] fixHost starting: 
	I0520 08:24:56.415736    5083 fix.go:103] recreateIfNeeded on default-k8s-diff-port-646000: state=Stopped err=<nil>
	W0520 08:24:56.415777    5083 fix.go:129] unexpected machine state, will restart: <nil>
	I0520 08:24:56.424470    5083 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-646000" ...
	I0520 08:24:56.431637    5083 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/default-k8s-diff-port-646000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/default-k8s-diff-port-646000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/default-k8s-diff-port-646000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:ee:30:b1:e8:0f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/default-k8s-diff-port-646000/disk.qcow2
	I0520 08:24:56.442277    5083 main.go:141] libmachine: STDOUT: 
	I0520 08:24:56.442343    5083 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 08:24:56.442478    5083 fix.go:57] fixHost completed within 27.479042ms
	I0520 08:24:56.442500    5083 start.go:83] releasing machines lock for "default-k8s-diff-port-646000", held for 27.621833ms
	W0520 08:24:56.442537    5083 start.go:687] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 08:24:56.442961    5083 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:24:56.442980    5083 start.go:702] Will try again in 5 seconds ...
	I0520 08:25:01.445263    5083 start.go:364] acquiring machines lock for default-k8s-diff-port-646000: {Name:mk270c257f49033ffda3b70fea9b877c78314aa8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 08:25:01.445969    5083 start.go:368] acquired machines lock for "default-k8s-diff-port-646000" in 562.375µs
	I0520 08:25:01.446170    5083 start.go:96] Skipping create...Using existing machine configuration
	I0520 08:25:01.446191    5083 fix.go:55] fixHost starting: 
	I0520 08:25:01.447052    5083 fix.go:103] recreateIfNeeded on default-k8s-diff-port-646000: state=Stopped err=<nil>
	W0520 08:25:01.447078    5083 fix.go:129] unexpected machine state, will restart: <nil>
	I0520 08:25:01.455943    5083 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-646000" ...
	I0520 08:25:01.459135    5083 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/default-k8s-diff-port-646000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/default-k8s-diff-port-646000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/default-k8s-diff-port-646000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:ee:30:b1:e8:0f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/default-k8s-diff-port-646000/disk.qcow2
	I0520 08:25:01.468187    5083 main.go:141] libmachine: STDOUT: 
	I0520 08:25:01.468242    5083 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 08:25:01.468319    5083 fix.go:57] fixHost completed within 22.129375ms
	I0520 08:25:01.468337    5083 start.go:83] releasing machines lock for "default-k8s-diff-port-646000", held for 22.325458ms
	W0520 08:25:01.468645    5083 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-646000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-646000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:25:01.479891    5083 out.go:177] 
	W0520 08:25:01.484166    5083 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 08:25:01.484225    5083 out.go:239] * 
	* 
	W0520 08:25:01.486490    5083 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 08:25:01.497959    5083 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-646000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-646000 -n default-k8s-diff-port-646000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-646000 -n default-k8s-diff-port-646000: exit status 7 (49.8315ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-646000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.96s)
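
The transcript shows exactly one retry after five seconds (start.go:702) before the start is abandoned. A rough sketch of that observed behavior, with a stand-in for the driver start (the real logic lives in minikube's start path; this only approximates what the log records):

	// retry.go - sketch of the single five-second retry visible above.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startHost stands in for the qemu2 driver start; in this run it
	// always fails the same way.
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second)
			if err := startHost(); err != nil {
				fmt.Println("* Failed to start qemu2 VM:", err)
			}
		}
	}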

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-646000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-646000 -n default-k8s-diff-port-646000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-646000 -n default-k8s-diff-port-646000: exit status 7 (33.471667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-646000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-646000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-646000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-646000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (30.1505ms)

** stderr ** 
	error: context "default-k8s-diff-port-646000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-646000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-646000 -n default-k8s-diff-port-646000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-646000 -n default-k8s-diff-port-646000: exit status 7 (32.318042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-646000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-646000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-646000 "sudo crictl images -o json": exit status 89 (41.400834ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-646000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-646000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p default-k8s-diff-port-646000"
start_stop_delete_test.go:304: v1.27.2 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.7-0",
- 	"registry.k8s.io/kube-apiserver:v1.27.2",
- 	"registry.k8s.io/kube-controller-manager:v1.27.2",
- 	"registry.k8s.io/kube-proxy:v1.27.2",
- 	"registry.k8s.io/kube-scheduler:v1.27.2",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-646000 -n default-k8s-diff-port-646000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-646000 -n default-k8s-diff-port-646000: exit status 7 (28.512667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-646000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
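
The decode error is expected given the input: the test fed minikube's "control plane must be running" advice (which begins with '*') into a JSON decoder. What it wants from "crictl images -o json" is a document it can unmarshal; a sketch, where the field names are an assumption based on crictl's usual output shape:

	// images.go - sketch: decode the JSON shape VerifyKubernetesImages expects.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	type criImages struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		sample := []byte(`{"images":[{"repoTags":["registry.k8s.io/pause:3.9"]}]}`)
		var imgs criImages
		if err := json.Unmarshal(sample, &imgs); err != nil {
			// This is the branch the test hit when given '*'-prefixed text.
			fmt.Println("failed to decode images json:", err)
			return
		}
		for _, img := range imgs.Images {
			fmt.Println(img.RepoTags)
		}
	}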

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-646000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-646000 --alsologtostderr -v=1: exit status 89 (45.079917ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-646000"

-- /stdout --
** stderr ** 
	I0520 08:25:01.752832    5102 out.go:296] Setting OutFile to fd 1 ...
	I0520 08:25:01.752958    5102 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:25:01.752961    5102 out.go:309] Setting ErrFile to fd 2...
	I0520 08:25:01.752963    5102 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:25:01.753029    5102 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16543-1012/.minikube/bin
	I0520 08:25:01.753218    5102 out.go:303] Setting JSON to false
	I0520 08:25:01.753227    5102 mustload.go:65] Loading cluster: default-k8s-diff-port-646000
	I0520 08:25:01.753396    5102 config.go:182] Loaded profile config "default-k8s-diff-port-646000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0520 08:25:01.757924    5102 out.go:177] * The control plane node must be running for this command
	I0520 08:25:01.765912    5102 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-646000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-646000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-646000 -n default-k8s-diff-port-646000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-646000 -n default-k8s-diff-port-646000: exit status 7 (28.050959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-646000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-646000 -n default-k8s-diff-port-646000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-646000 -n default-k8s-diff-port-646000: exit status 7 (28.209291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-646000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)
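
pause exits with status 89 here because the profile's host is stopped. The post-mortem helpers already gate on `minikube status --format={{.Host}}`; the same guard, sketched in Go (binary path and profile name copied from the commands above, the guard itself is only an illustration), would skip the pause instead of failing it:

	// pausegate.go - sketch: check host state before attempting pause.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func hostState(profile string) string {
		// status prints the host state on stdout even when it exits non-zero,
		// as the post-mortem output above shows ("Stopped", exit status 7).
		out, _ := exec.Command("out/minikube-darwin-arm64", "status",
			"--format={{.Host}}", "-p", profile).Output()
		return strings.TrimSpace(string(out))
	}

	func main() {
		if state := hostState("default-k8s-diff-port-646000"); state != "Running" {
			fmt.Printf("host is %q, skipping pause\n", state)
			return
		}
		fmt.Println("host is running; safe to pause")
	}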

TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-700000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.27.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-700000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.27.2: exit status 80 (5.1840095s)

-- stdout --
	* [newest-cni-700000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16543
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node newest-cni-700000 in cluster newest-cni-700000
	* Restarting existing qemu2 VM for "newest-cni-700000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-700000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 08:25:04.125989    5140 out.go:296] Setting OutFile to fd 1 ...
	I0520 08:25:04.126101    5140 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:25:04.126104    5140 out.go:309] Setting ErrFile to fd 2...
	I0520 08:25:04.126107    5140 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:25:04.126187    5140 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16543-1012/.minikube/bin
	I0520 08:25:04.127129    5140 out.go:303] Setting JSON to false
	I0520 08:25:04.142377    5140 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1475,"bootTime":1684594829,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 08:25:04.142439    5140 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0520 08:25:04.147374    5140 out.go:177] * [newest-cni-700000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0520 08:25:04.154328    5140 out.go:177]   - MINIKUBE_LOCATION=16543
	I0520 08:25:04.154329    5140 notify.go:220] Checking for updates...
	I0520 08:25:04.162280    5140 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	I0520 08:25:04.165317    5140 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 08:25:04.168310    5140 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 08:25:04.171299    5140 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	I0520 08:25:04.174314    5140 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 08:25:04.177562    5140 config.go:182] Loaded profile config "newest-cni-700000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0520 08:25:04.177778    5140 driver.go:375] Setting default libvirt URI to qemu:///system
	I0520 08:25:04.182362    5140 out.go:177] * Using the qemu2 driver based on existing profile
	I0520 08:25:04.194275    5140 start.go:295] selected driver: qemu2
	I0520 08:25:04.194282    5140 start.go:870] validating driver "qemu2" against &{Name:newest-cni-700000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:newest-cni-700000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0520 08:25:04.194331    5140 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 08:25:04.196285    5140 start_flags.go:934] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0520 08:25:04.196306    5140 cni.go:84] Creating CNI manager for ""
	I0520 08:25:04.196314    5140 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 08:25:04.196322    5140 start_flags.go:319] config:
	{Name:newest-cni-700000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:newest-cni-700000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0520 08:25:04.196393    5140 iso.go:125] acquiring lock: {Name:mkf05a14b47e9b67445252fbb4917f3bcb37f054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 08:25:04.203288    5140 out.go:177] * Starting control plane node newest-cni-700000 in cluster newest-cni-700000
	I0520 08:25:04.207329    5140 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0520 08:25:04.207352    5140 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0520 08:25:04.207373    5140 cache.go:57] Caching tarball of preloaded images
	I0520 08:25:04.207436    5140 preload.go:174] Found /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 08:25:04.207441    5140 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0520 08:25:04.207501    5140 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/newest-cni-700000/config.json ...
	I0520 08:25:04.207873    5140 cache.go:195] Successfully downloaded all kic artifacts
	I0520 08:25:04.207882    5140 start.go:364] acquiring machines lock for newest-cni-700000: {Name:mk270c257f49033ffda3b70fea9b877c78314aa8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 08:25:04.207918    5140 start.go:368] acquired machines lock for "newest-cni-700000" in 29.875µs
	I0520 08:25:04.207929    5140 start.go:96] Skipping create...Using existing machine configuration
	I0520 08:25:04.207933    5140 fix.go:55] fixHost starting: 
	I0520 08:25:04.208062    5140 fix.go:103] recreateIfNeeded on newest-cni-700000: state=Stopped err=<nil>
	W0520 08:25:04.208070    5140 fix.go:129] unexpected machine state, will restart: <nil>
	I0520 08:25:04.212338    5140 out.go:177] * Restarting existing qemu2 VM for "newest-cni-700000" ...
	I0520 08:25:04.220305    5140 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/newest-cni-700000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/newest-cni-700000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/newest-cni-700000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:6d:2f:4c:cf:d1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/newest-cni-700000/disk.qcow2
	I0520 08:25:04.222220    5140 main.go:141] libmachine: STDOUT: 
	I0520 08:25:04.222237    5140 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 08:25:04.222265    5140 fix.go:57] fixHost completed within 14.333208ms
	I0520 08:25:04.222271    5140 start.go:83] releasing machines lock for "newest-cni-700000", held for 14.348917ms
	W0520 08:25:04.222277    5140 start.go:687] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 08:25:04.222332    5140 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:25:04.222339    5140 start.go:702] Will try again in 5 seconds ...
	I0520 08:25:09.224569    5140 start.go:364] acquiring machines lock for newest-cni-700000: {Name:mk270c257f49033ffda3b70fea9b877c78314aa8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 08:25:09.224944    5140 start.go:368] acquired machines lock for "newest-cni-700000" in 280.209µs
	I0520 08:25:09.225081    5140 start.go:96] Skipping create...Using existing machine configuration
	I0520 08:25:09.225103    5140 fix.go:55] fixHost starting: 
	I0520 08:25:09.225836    5140 fix.go:103] recreateIfNeeded on newest-cni-700000: state=Stopped err=<nil>
	W0520 08:25:09.225863    5140 fix.go:129] unexpected machine state, will restart: <nil>
	I0520 08:25:09.235660    5140 out.go:177] * Restarting existing qemu2 VM for "newest-cni-700000" ...
	I0520 08:25:09.239964    5140 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/newest-cni-700000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/newest-cni-700000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/newest-cni-700000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:6d:2f:4c:cf:d1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16543-1012/.minikube/machines/newest-cni-700000/disk.qcow2
	I0520 08:25:09.249319    5140 main.go:141] libmachine: STDOUT: 
	I0520 08:25:09.249371    5140 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 08:25:09.249449    5140 fix.go:57] fixHost completed within 24.350083ms
	I0520 08:25:09.249467    5140 start.go:83] releasing machines lock for "newest-cni-700000", held for 24.502375ms
	W0520 08:25:09.249812    5140 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-700000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 08:25:09.256702    5140 out.go:177] 
	W0520 08:25:09.259940    5140 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 08:25:09.259969    5140 out.go:239] * 
	W0520 08:25:09.262746    5140 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 08:25:09.269653    5140 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-700000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.27.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-700000 -n newest-cni-700000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-700000 -n newest-cni-700000: exit status 7 (68.584959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-700000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.25s)
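Every qemu2 start attempt in this run fails the same way: socket_vmnet_client gets "Connection refused" on /var/run/socket_vmnet, so the VM is never brought up and every dependent subtest fails in turn. A minimal diagnostic sketch for the CI host, assuming the socket_vmnet install paths shown in the log above (the gateway address below is an illustrative placeholder, not taken from this report):

	# Is the daemon running, and does its socket exist?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet
	# If not, start it manually (requires root; gateway address is an example):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &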

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p newest-cni-700000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p newest-cni-700000 "sudo crictl images -o json": exit status 89 (42.118584ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p newest-cni-700000"

                                                
                                                
-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p newest-cni-700000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json: invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p newest-cni-700000"
start_stop_delete_test.go:304: v1.27.2 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.7-0",
- 	"registry.k8s.io/kube-apiserver:v1.27.2",
- 	"registry.k8s.io/kube-controller-manager:v1.27.2",
- 	"registry.k8s.io/kube-proxy:v1.27.2",
- 	"registry.k8s.io/kube-scheduler:v1.27.2",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-700000 -n newest-cni-700000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-700000 -n newest-cni-700000: exit status 7 (28.900166ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-700000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)
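This subtest shells into the node and decodes "sudo crictl images -o json"; with the host stopped, minikube prints its advisory text instead, which is why the JSON decode fails on the leading '*'. For reference, a hedged sketch of the equivalent manual check on a running cluster (piping through jq is an assumption; the test itself decodes the JSON in Go):

	# On a healthy cluster this lists the image tags the test compares against:
	out/minikube-darwin-arm64 ssh -p newest-cni-700000 "sudo crictl images -o json" | jq -r '.images[].repoTags[]'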

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-700000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-700000 --alsologtostderr -v=1: exit status 89 (39.781833ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p newest-cni-700000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 08:25:09.450978    5153 out.go:296] Setting OutFile to fd 1 ...
	I0520 08:25:09.451113    5153 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:25:09.451120    5153 out.go:309] Setting ErrFile to fd 2...
	I0520 08:25:09.451123    5153 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:25:09.451200    5153 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16543-1012/.minikube/bin
	I0520 08:25:09.451417    5153 out.go:303] Setting JSON to false
	I0520 08:25:09.451424    5153 mustload.go:65] Loading cluster: newest-cni-700000
	I0520 08:25:09.451589    5153 config.go:182] Loaded profile config "newest-cni-700000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0520 08:25:09.456092    5153 out.go:177] * The control plane node must be running for this command
	I0520 08:25:09.459198    5153 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-700000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-700000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-700000 -n newest-cni-700000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-700000 -n newest-cni-700000: exit status 7 (28.994958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-700000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-700000 -n newest-cni-700000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-700000 -n newest-cni-700000: exit status 7 (28.786333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-700000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)
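As with the other newest-cni subtests, pause exits 89 only because the control plane is down; the recovery path is the one the error message itself suggests:

	out/minikube-darwin-arm64 start -p newest-cni-700000
	out/minikube-darwin-arm64 pause -p newest-cni-700000 --alsologtostderr -v=1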

                                                
                                    

Test pass (136/242)

Order passed test Duration
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.1
10 TestDownloadOnly/v1.27.2/json-events 16.54
11 TestDownloadOnly/v1.27.2/preload-exists 0
14 TestDownloadOnly/v1.27.2/kubectl 0
15 TestDownloadOnly/v1.27.2/LogsDuration 0.08
16 TestDownloadOnly/DeleteAll 0.28
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.27
19 TestBinaryMirror 0.38
29 TestHyperKitDriverInstallOrUpdate 8.2
32 TestErrorSpam/setup 29.71
33 TestErrorSpam/start 0.34
34 TestErrorSpam/status 0.24
35 TestErrorSpam/pause 0.67
36 TestErrorSpam/unpause 0.6
37 TestErrorSpam/stop 12.25
40 TestFunctional/serial/CopySyncFile 0
41 TestFunctional/serial/StartWithProxy 47.02
42 TestFunctional/serial/AuditLog 0
43 TestFunctional/serial/SoftStart 52.11
44 TestFunctional/serial/KubeContext 0.03
45 TestFunctional/serial/KubectlGetPods 0.06
48 TestFunctional/serial/CacheCmd/cache/add_remote 6.15
49 TestFunctional/serial/CacheCmd/cache/add_local 1.2
50 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.03
51 TestFunctional/serial/CacheCmd/cache/list 0.03
52 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.07
53 TestFunctional/serial/CacheCmd/cache/cache_reload 1.34
54 TestFunctional/serial/CacheCmd/cache/delete 0.07
55 TestFunctional/serial/MinikubeKubectlCmd 0.46
56 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.56
57 TestFunctional/serial/ExtraConfig 41.64
58 TestFunctional/serial/ComponentHealth 0.05
59 TestFunctional/serial/LogsCmd 0.71
60 TestFunctional/serial/LogsFileCmd 0.68
62 TestFunctional/parallel/ConfigCmd 0.19
63 TestFunctional/parallel/DashboardCmd 9.34
64 TestFunctional/parallel/DryRun 0.29
65 TestFunctional/parallel/InternationalLanguage 0.13
66 TestFunctional/parallel/StatusCmd 0.26
71 TestFunctional/parallel/AddonsCmd 0.17
72 TestFunctional/parallel/PersistentVolumeClaim 23.15
74 TestFunctional/parallel/SSHCmd 0.15
75 TestFunctional/parallel/CpCmd 0.27
77 TestFunctional/parallel/FileSync 0.07
78 TestFunctional/parallel/CertSync 0.42
82 TestFunctional/parallel/NodeLabels 0.05
84 TestFunctional/parallel/NonActiveRuntimeDisabled 0.06
86 TestFunctional/parallel/License 0.64
87 TestFunctional/parallel/Version/short 0.04
88 TestFunctional/parallel/Version/components 0.24
89 TestFunctional/parallel/ImageCommands/ImageListShort 0.08
90 TestFunctional/parallel/ImageCommands/ImageListTable 0.09
91 TestFunctional/parallel/ImageCommands/ImageListJson 0.08
92 TestFunctional/parallel/ImageCommands/ImageListYaml 0.08
93 TestFunctional/parallel/ImageCommands/ImageBuild 3.17
94 TestFunctional/parallel/ImageCommands/Setup 2.42
95 TestFunctional/parallel/DockerEnv/bash 0.42
96 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
97 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
98 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
99 TestFunctional/parallel/ServiceCmd/DeployApp 13.11
100 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.15
101 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.54
102 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 4.2
103 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.53
104 TestFunctional/parallel/ImageCommands/ImageRemove 0.16
105 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.48
106 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.68
108 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.22
109 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
111 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.1
112 TestFunctional/parallel/ServiceCmd/List 0.12
113 TestFunctional/parallel/ServiceCmd/JSONOutput 0.09
114 TestFunctional/parallel/ServiceCmd/HTTPS 0.1
115 TestFunctional/parallel/ServiceCmd/Format 0.1
116 TestFunctional/parallel/ServiceCmd/URL 0.1
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
118 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
119 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
120 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
121 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
122 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
123 TestFunctional/parallel/ProfileCmd/profile_not_create 0.2
124 TestFunctional/parallel/ProfileCmd/profile_list 0.15
125 TestFunctional/parallel/ProfileCmd/profile_json_output 0.16
126 TestFunctional/parallel/MountCmd/any-port 6.06
127 TestFunctional/parallel/MountCmd/specific-port 1.15
129 TestFunctional/delete_addon-resizer_images 0.19
130 TestFunctional/delete_my-image_image 0.04
131 TestFunctional/delete_minikube_cached_images 0.04
135 TestImageBuild/serial/Setup 30
136 TestImageBuild/serial/NormalBuild 2.2
138 TestImageBuild/serial/BuildWithDockerIgnore 0.16
139 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.11
142 TestIngressAddonLegacy/StartLegacyK8sCluster 81.48
144 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 14.82
145 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.2
149 TestJSONOutput/start/Command 72.69
150 TestJSONOutput/start/Audit 0
152 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
153 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
155 TestJSONOutput/pause/Command 0.32
156 TestJSONOutput/pause/Audit 0
158 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
159 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
161 TestJSONOutput/unpause/Command 0.34
162 TestJSONOutput/unpause/Audit 0
164 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
165 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
167 TestJSONOutput/stop/Command 12.16
168 TestJSONOutput/stop/Audit 0
170 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
171 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
172 TestErrorJSONOutput 0.36
177 TestMainNoArgs 0.03
178 TestMinikubeProfile 61.55
234 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
238 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
239 TestNoKubernetes/serial/ProfileList 0.16
240 TestNoKubernetes/serial/Stop 0.06
242 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
260 TestStartStop/group/old-k8s-version/serial/Stop 0.06
261 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.09
271 TestStartStop/group/no-preload/serial/Stop 0.07
272 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.09
282 TestStartStop/group/embed-certs/serial/Stop 0.06
283 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.09
293 TestStartStop/group/default-k8s-diff-port/serial/Stop 0.06
294 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.09
300 TestStartStop/group/newest-cni/serial/DeployApp 0
301 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
302 TestStartStop/group/newest-cni/serial/Stop 0.06
303 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.09
305 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
306 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-819000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-819000: exit status 85 (95.766125ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-819000 | jenkins | v1.30.1 | 20 May 23 08:03 PDT |          |
	|         | -p download-only-819000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/05/20 08:03:43
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 08:03:43.095340    1442 out.go:296] Setting OutFile to fd 1 ...
	I0520 08:03:43.095468    1442 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:03:43.095471    1442 out.go:309] Setting ErrFile to fd 2...
	I0520 08:03:43.095473    1442 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:03:43.095576    1442 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16543-1012/.minikube/bin
	W0520 08:03:43.095700    1442 root.go:312] Error reading config file at /Users/jenkins/minikube-integration/16543-1012/.minikube/config/config.json: open /Users/jenkins/minikube-integration/16543-1012/.minikube/config/config.json: no such file or directory
	I0520 08:03:43.096871    1442 out.go:303] Setting JSON to true
	I0520 08:03:43.113839    1442 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":194,"bootTime":1684594829,"procs":411,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 08:03:43.113901    1442 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0520 08:03:43.119826    1442 out.go:97] [download-only-819000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0520 08:03:43.119967    1442 notify.go:220] Checking for updates...
	I0520 08:03:43.122768    1442 out.go:169] MINIKUBE_LOCATION=16543
	W0520 08:03:43.120062    1442 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball: no such file or directory
	I0520 08:03:43.131853    1442 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	I0520 08:03:43.135793    1442 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 08:03:43.138802    1442 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 08:03:43.141847    1442 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	W0520 08:03:43.147820    1442 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0520 08:03:43.148040    1442 driver.go:375] Setting default libvirt URI to qemu:///system
	I0520 08:03:43.152875    1442 out.go:97] Using the qemu2 driver based on user configuration
	I0520 08:03:43.152898    1442 start.go:295] selected driver: qemu2
	I0520 08:03:43.152913    1442 start.go:870] validating driver "qemu2" against <nil>
	I0520 08:03:43.152960    1442 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0520 08:03:43.156798    1442 out.go:169] Automatically selected the socket_vmnet network
	I0520 08:03:43.162273    1442 start_flags.go:382] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0520 08:03:43.162343    1442 start_flags.go:897] Wait components to verify : map[apiserver:true system_pods:true]
	I0520 08:03:43.162367    1442 cni.go:84] Creating CNI manager for ""
	I0520 08:03:43.162383    1442 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0520 08:03:43.162388    1442 start_flags.go:319] config:
	{Name:download-only-819000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-819000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0520 08:03:43.162551    1442 iso.go:125] acquiring lock: {Name:mkf05a14b47e9b67445252fbb4917f3bcb37f054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 08:03:43.166848    1442 out.go:97] Downloading VM boot image ...
	I0520 08:03:43.166864    1442 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/iso/arm64/minikube-v1.30.1-1684174510-16506-arm64.iso
	I0520 08:03:58.666525    1442 out.go:97] Starting control plane node download-only-819000 in cluster download-only-819000
	I0520 08:03:58.666550    1442 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0520 08:03:58.783564    1442 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0520 08:03:58.783651    1442 cache.go:57] Caching tarball of preloaded images
	I0520 08:03:58.783907    1442 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0520 08:03:58.790013    1442 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0520 08:03:58.790024    1442 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0520 08:03:59.004399    1442 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0520 08:04:17.309003    1442 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0520 08:04:17.309128    1442 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0520 08:04:17.955488    1442 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0520 08:04:17.955663    1442 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/download-only-819000/config.json ...
	I0520 08:04:17.955687    1442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/download-only-819000/config.json: {Name:mkac2b33f86d40d978ccd26f5df89d7fe7d6cc30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 08:04:17.955943    1442 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0520 08:04:17.956121    1442 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 -> /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/darwin/arm64/v1.16.0/kubectl
	I0520 08:04:18.678505    1442 out.go:169] 
	W0520 08:04:18.683611    1442 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/16543-1012/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x107b9e1b8 0x107b9e1b8 0x107b9e1b8 0x107b9e1b8 0x107b9e1b8 0x107b9e1b8 0x107b9e1b8] Decompressors:map[bz2:0x14000056a28 gz:0x14000056a80 tar:0x14000056a30 tar.bz2:0x14000056a40 tar.gz:0x14000056a50 tar.xz:0x14000056a60 tar.zst:0x14000056a70 tbz2:0x14000056a40 tgz:0x14000056a50 txz:0x14000056a60 tzst:0x14000056a70 xz:0x14000056a88 zip:0x14000056a90 zst:0x14000056aa0] Getters:map[file:0x140005a2c70 http:0x140009d6190 https:0x140009d61e0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0520 08:04:18.683645    1442 out_reason.go:110] 
	W0520 08:04:18.691481    1442 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 08:04:18.696487    1442 out.go:169] 
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-819000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.10s)
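The log above shows why this profile never cached kubectl: the checksum download for the v1.16.0 darwin/arm64 kubectl returns 404, since upstream did not publish darwin/arm64 kubectl binaries that far back, while v1.27.2 (fetched later in this run) does. A quick hedged way to confirm, using the two checksum URLs taken from the logs:

	# 404 expected for v1.16.0; 200 expected for v1.27.2:
	curl -sIL https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 | head -n 1
	curl -sIL https://dl.k8s.io/release/v1.27.2/bin/darwin/arm64/kubectl.sha256 | head -n 1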

                                                
                                    
TestDownloadOnly/v1.27.2/json-events (16.54s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-819000 --force --alsologtostderr --kubernetes-version=v1.27.2 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-819000 --force --alsologtostderr --kubernetes-version=v1.27.2 --container-runtime=docker --driver=qemu2 : (16.536857833s)
--- PASS: TestDownloadOnly/v1.27.2/json-events (16.54s)

                                                
                                    
TestDownloadOnly/v1.27.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.2/preload-exists
--- PASS: TestDownloadOnly/v1.27.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.27.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.2/kubectl
--- PASS: TestDownloadOnly/v1.27.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.27.2/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.2/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-819000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-819000: exit status 85 (78.089834ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-819000 | jenkins | v1.30.1 | 20 May 23 08:03 PDT |          |
	|         | -p download-only-819000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-819000 | jenkins | v1.30.1 | 20 May 23 08:04 PDT |          |
	|         | -p download-only-819000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.27.2   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/05/20 08:04:18
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 08:04:18.886772    1474 out.go:296] Setting OutFile to fd 1 ...
	I0520 08:04:18.886921    1474 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:04:18.886924    1474 out.go:309] Setting ErrFile to fd 2...
	I0520 08:04:18.886928    1474 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:04:18.886991    1474 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16543-1012/.minikube/bin
	W0520 08:04:18.887052    1474 root.go:312] Error reading config file at /Users/jenkins/minikube-integration/16543-1012/.minikube/config/config.json: open /Users/jenkins/minikube-integration/16543-1012/.minikube/config/config.json: no such file or directory
	I0520 08:04:18.887920    1474 out.go:303] Setting JSON to true
	I0520 08:04:18.902900    1474 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":229,"bootTime":1684594829,"procs":409,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 08:04:18.902963    1474 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0520 08:04:18.907797    1474 out.go:97] [download-only-819000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0520 08:04:18.911813    1474 out.go:169] MINIKUBE_LOCATION=16543
	I0520 08:04:18.907913    1474 notify.go:220] Checking for updates...
	I0520 08:04:18.917819    1474 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	I0520 08:04:18.921825    1474 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 08:04:18.924800    1474 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 08:04:18.927853    1474 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	W0520 08:04:18.933739    1474 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0520 08:04:18.934010    1474 config.go:182] Loaded profile config "download-only-819000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0520 08:04:18.934054    1474 start.go:778] api.Load failed for download-only-819000: filestore "download-only-819000": Docker machine "download-only-819000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0520 08:04:18.934076    1474 driver.go:375] Setting default libvirt URI to qemu:///system
	W0520 08:04:18.934088    1474 start.go:778] api.Load failed for download-only-819000: filestore "download-only-819000": Docker machine "download-only-819000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0520 08:04:18.936748    1474 out.go:97] Using the qemu2 driver based on existing profile
	I0520 08:04:18.936754    1474 start.go:295] selected driver: qemu2
	I0520 08:04:18.936756    1474 start.go:870] validating driver "qemu2" against &{Name:download-only-819000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-819000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0520 08:04:18.938549    1474 cni.go:84] Creating CNI manager for ""
	I0520 08:04:18.938565    1474 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 08:04:18.938572    1474 start_flags.go:319] config:
	{Name:download-only-819000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:download-only-819000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0520 08:04:18.938651    1474 iso.go:125] acquiring lock: {Name:mkf05a14b47e9b67445252fbb4917f3bcb37f054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 08:04:18.941800    1474 out.go:97] Starting control plane node download-only-819000 in cluster download-only-819000
	I0520 08:04:18.941806    1474 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0520 08:04:19.162048    1474 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.2/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0520 08:04:19.162125    1474 cache.go:57] Caching tarball of preloaded images
	I0520 08:04:19.163082    1474 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0520 08:04:19.168943    1474 out.go:97] Downloading Kubernetes v1.27.2 preload ...
	I0520 08:04:19.168987    1474 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 ...
	I0520 08:04:19.380465    1474 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.2/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4?checksum=md5:4271952d77a401a4cbcfc4225771d46f -> /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0520 08:04:31.208997    1474 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 ...
	I0520 08:04:31.209150    1474 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 ...
	I0520 08:04:31.771279    1474 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0520 08:04:31.771346    1474 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/download-only-819000/config.json ...
	I0520 08:04:31.771638    1474 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0520 08:04:31.771798    1474 download.go:107] Downloading: https://dl.k8s.io/release/v1.27.2/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.27.2/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/16543-1012/.minikube/cache/darwin/arm64/v1.27.2/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-819000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.27.2/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/DeleteAll (0.28s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.28s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (0.27s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-819000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.27s)

                                                
                                    
TestBinaryMirror (0.38s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-652000 --alsologtostderr --binary-mirror http://127.0.0.1:49310 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-652000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-652000
--- PASS: TestBinaryMirror (0.38s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (8.2s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestHyperKitDriverInstallOrUpdate
E0520 08:18:39.744480    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/functional-537000/client.crt: no such file or directory
--- PASS: TestHyperKitDriverInstallOrUpdate (8.20s)

                                                
                                    
TestErrorSpam/setup (29.71s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-532000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-532000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-532000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-532000 --driver=qemu2 : (29.711285792s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.25.9, which may have incompatibilities with Kubernetes 1.27.2."
--- PASS: TestErrorSpam/setup (29.71s)

                                                
                                    
TestErrorSpam/start (0.34s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-532000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-532000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-532000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-532000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-532000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-532000 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

                                                
                                    
TestErrorSpam/status (0.24s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-532000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-532000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-532000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-532000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-532000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-532000 status
--- PASS: TestErrorSpam/status (0.24s)

                                                
                                    
TestErrorSpam/pause (0.67s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-532000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-532000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-532000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-532000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-532000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-532000 pause
--- PASS: TestErrorSpam/pause (0.67s)

                                                
                                    
TestErrorSpam/unpause (0.6s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-532000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-532000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-532000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-532000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-532000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-532000 unpause
--- PASS: TestErrorSpam/unpause (0.60s)

                                                
                                    
TestErrorSpam/stop (12.25s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-532000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-532000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-532000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-532000 stop: (12.0869855s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-532000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-532000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-532000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-532000 stop
--- PASS: TestErrorSpam/stop (12.25s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1850: local sync path: /Users/jenkins/minikube-integration/16543-1012/.minikube/files/etc/test/nested/copy/1437/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (47.02s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2229: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-537000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2229: (dbg) Done: out/minikube-darwin-arm64 start -p functional-537000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (47.024001375s)
--- PASS: TestFunctional/serial/StartWithProxy (47.02s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (52.11s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:654: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-537000 --alsologtostderr -v=8
functional_test.go:654: (dbg) Done: out/minikube-darwin-arm64 start -p functional-537000 --alsologtostderr -v=8: (52.1066565s)
functional_test.go:658: soft start took 52.107066917s for "functional-537000" cluster.
--- PASS: TestFunctional/serial/SoftStart (52.11s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:676: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:691: (dbg) Run:  kubectl --context functional-537000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/add_remote (6.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1044: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 cache add registry.k8s.io/pause:3.1
functional_test.go:1044: (dbg) Done: out/minikube-darwin-arm64 -p functional-537000 cache add registry.k8s.io/pause:3.1: (2.16722125s)
functional_test.go:1044: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 cache add registry.k8s.io/pause:3.3
functional_test.go:1044: (dbg) Done: out/minikube-darwin-arm64 -p functional-537000 cache add registry.k8s.io/pause:3.3: (2.262373417s)
functional_test.go:1044: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 cache add registry.k8s.io/pause:latest
functional_test.go:1044: (dbg) Done: out/minikube-darwin-arm64 -p functional-537000 cache add registry.k8s.io/pause:latest: (1.720180042s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (6.15s)

TestFunctional/serial/CacheCmd/cache/add_local (1.2s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1072: (dbg) Run:  docker build -t minikube-local-cache-test:functional-537000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local2419187496/001
functional_test.go:1084: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 cache add minikube-local-cache-test:functional-537000
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 cache delete minikube-local-cache-test:functional-537000
functional_test.go:1078: (dbg) Run:  docker rmi minikube-local-cache-test:functional-537000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.20s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1097: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.03s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1105: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.34s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1142: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1148: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1148: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-537000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (68.679292ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 cache reload
functional_test.go:1153: (dbg) Done: out/minikube-darwin-arm64 -p functional-537000 cache reload: (1.11635825s)
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.34s)
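
The reload flow above can be reproduced by hand. A minimal sketch using the same commands the test runs (assumes a minikube binary on PATH; "functional-537000" is this run's profile name, substitute your own):

    # Remove a cached image inside the VM, confirm it is gone, then
    # restore it from the host-side cache with "cache reload".
    minikube -p functional-537000 ssh sudo docker rmi registry.k8s.io/pause:latest
    minikube -p functional-537000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits 1: image gone
    minikube -p functional-537000 cache reload
    minikube -p functional-537000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again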

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1167: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1167: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (0.46s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:711: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 kubectl -- --context functional-537000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.46s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.56s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:736: (dbg) Run:  out/kubectl --context functional-537000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.56s)

TestFunctional/serial/ExtraConfig (41.64s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:752: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-537000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:752: (dbg) Done: out/minikube-darwin-arm64 start -p functional-537000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.642252084s)
functional_test.go:756: restart took 41.642379292s for "functional-537000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (41.64s)

TestFunctional/serial/ComponentHealth (0.05s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:805: (dbg) Run:  kubectl --context functional-537000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:820: etcd phase: Running
functional_test.go:830: etcd status: Ready
functional_test.go:820: kube-apiserver phase: Running
functional_test.go:830: kube-apiserver status: Ready
functional_test.go:820: kube-controller-manager phase: Running
functional_test.go:830: kube-controller-manager status: Ready
functional_test.go:820: kube-scheduler phase: Running
functional_test.go:830: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)

TestFunctional/serial/LogsCmd (0.71s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1231: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.71s)

TestFunctional/serial/LogsFileCmd (0.68s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd4031827409/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.68s)

TestFunctional/parallel/ConfigCmd (0.19s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1194: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 config unset cpus
functional_test.go:1194: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 config get cpus
functional_test.go:1194: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-537000 config get cpus: exit status 14 (28.232375ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1194: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 config set cpus 2
functional_test.go:1194: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 config get cpus
functional_test.go:1194: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 config unset cpus
functional_test.go:1194: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 config get cpus
functional_test.go:1194: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-537000 config get cpus: exit status 14 (26.992208ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.19s)
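
The round-trip above also documents the CLI contract checked by this test: "config get" on an unset key exits with status 14. A minimal sketch (assumes minikube on PATH):

    minikube -p functional-537000 config set cpus 2
    minikube -p functional-537000 config get cpus     # prints 2
    minikube -p functional-537000 config unset cpus
    minikube -p functional-537000 config get cpus     # exit status 14: key not in config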

TestFunctional/parallel/DashboardCmd (9.34s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:900: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-537000 --alsologtostderr -v=1]
functional_test.go:905: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-537000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2302: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.34s)

TestFunctional/parallel/DryRun (0.29s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:969: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-537000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:969: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-537000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (157.510083ms)
-- stdout --
	* [functional-537000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16543
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0520 08:09:37.296899    2284 out.go:296] Setting OutFile to fd 1 ...
	I0520 08:09:37.297075    2284 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:09:37.297078    2284 out.go:309] Setting ErrFile to fd 2...
	I0520 08:09:37.297081    2284 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:09:37.297162    2284 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16543-1012/.minikube/bin
	I0520 08:09:37.298401    2284 out.go:303] Setting JSON to false
	I0520 08:09:37.316130    2284 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":548,"bootTime":1684594829,"procs":444,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 08:09:37.316210    2284 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0520 08:09:37.323975    2284 out.go:177] * [functional-537000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0520 08:09:37.336854    2284 out.go:177]   - MINIKUBE_LOCATION=16543
	I0520 08:09:37.331040    2284 notify.go:220] Checking for updates...
	I0520 08:09:37.342934    2284 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	I0520 08:09:37.349924    2284 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 08:09:37.356879    2284 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 08:09:37.364900    2284 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	I0520 08:09:37.372903    2284 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 08:09:37.377225    2284 config.go:182] Loaded profile config "functional-537000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0520 08:09:37.377509    2284 driver.go:375] Setting default libvirt URI to qemu:///system
	I0520 08:09:37.381931    2284 out.go:177] * Using the qemu2 driver based on existing profile
	I0520 08:09:37.389931    2284 start.go:295] selected driver: qemu2
	I0520 08:09:37.389938    2284 start.go:870] validating driver "qemu2" against &{Name:functional-537000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:functional-537000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0520 08:09:37.390010    2284 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 08:09:37.396856    2284 out.go:177] 
	W0520 08:09:37.401043    2284 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0520 08:09:37.404779    2284 out.go:177] 
** /stderr **
functional_test.go:986: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-537000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.29s)
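
As the log shows, --dry-run validates the requested settings against the existing profile without touching the VM: the 250MB request fails fast with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23), while the second invocation with no memory override passes. A minimal sketch (assumes minikube on PATH):

    minikube start -p functional-537000 --dry-run --memory 250MB --driver=qemu2   # exit 23, nothing started
    minikube start -p functional-537000 --dry-run --driver=qemu2                  # validation passes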

TestFunctional/parallel/InternationalLanguage (0.13s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1015: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-537000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1015: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-537000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (124.898625ms)
-- stdout --
	* [functional-537000] minikube v1.30.1 sur Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16543
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0520 08:09:37.565187    2296 out.go:296] Setting OutFile to fd 1 ...
	I0520 08:09:37.565285    2296 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:09:37.565288    2296 out.go:309] Setting ErrFile to fd 2...
	I0520 08:09:37.565290    2296 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0520 08:09:37.565388    2296 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16543-1012/.minikube/bin
	I0520 08:09:37.566579    2296 out.go:303] Setting JSON to false
	I0520 08:09:37.582881    2296 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":548,"bootTime":1684594829,"procs":444,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 08:09:37.582968    2296 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0520 08:09:37.587991    2296 out.go:177] * [functional-537000] minikube v1.30.1 sur Darwin 13.3.1 (arm64)
	I0520 08:09:37.597986    2296 out.go:177]   - MINIKUBE_LOCATION=16543
	I0520 08:09:37.595135    2296 notify.go:220] Checking for updates...
	I0520 08:09:37.605957    2296 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	I0520 08:09:37.610897    2296 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 08:09:37.617888    2296 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 08:09:37.625941    2296 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	I0520 08:09:37.631905    2296 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 08:09:37.633869    2296 config.go:182] Loaded profile config "functional-537000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0520 08:09:37.634088    2296 driver.go:375] Setting default libvirt URI to qemu:///system
	I0520 08:09:37.637941    2296 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0520 08:09:37.643918    2296 start.go:295] selected driver: qemu2
	I0520 08:09:37.643923    2296 start.go:870] validating driver "qemu2" against &{Name:functional-537000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16506/minikube-v1.30.1-1684174510-16506-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684523789-16533@sha256:ed200ff6d686f303885e8aaf964442d08018856d63a8e23f7acdc068766ea82b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:functional-537000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0520 08:09:37.643972    2296 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 08:09:37.649962    2296 out.go:177] 
	W0520 08:09:37.653933    2296 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0520 08:09:37.657991    2296 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)

TestFunctional/parallel/StatusCmd (0.26s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:849: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 status
functional_test.go:855: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:867: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.26s)
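
The second invocation above shows that "status -f" accepts a Go template over the status fields (.Host, .Kubelet, .APIServer, .Kubeconfig); the "kublet" spelling is literal text in the test's own template, not a field name. A minimal sketch (assumes minikube on PATH):

    minikube -p functional-537000 status -f 'host:{{.Host}},kubelet:{{.Kubelet}}'
    minikube -p functional-537000 status -o json   # same data, machine-readable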

TestFunctional/parallel/AddonsCmd (0.17s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1688: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 addons list
functional_test.go:1700: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

TestFunctional/parallel/PersistentVolumeClaim (23.15s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [4eed99d3-264f-48bc-b062-efcc90544711] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.010960584s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-537000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-537000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-537000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-537000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [da2e3b45-e792-43f2-9b21-0aab8050f728] Pending
helpers_test.go:344: "sp-pod" [da2e3b45-e792-43f2-9b21-0aab8050f728] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [da2e3b45-e792-43f2-9b21-0aab8050f728] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.011951708s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-537000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-537000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-537000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [691b6746-1326-424e-9b23-9d1a9541f7d2] Pending
helpers_test.go:344: "sp-pod" [691b6746-1326-424e-9b23-9d1a9541f7d2] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [691b6746-1326-424e-9b23-9d1a9541f7d2] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.011587959s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-537000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (23.15s)
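
The sequence above is the point of the test: a file written to the PVC-backed mount survives deleting and recreating the pod. A minimal sketch using the test's own manifests (paths relative to the integration-test directory):

    kubectl --context functional-537000 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-537000 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-537000 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-537000 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-537000 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-537000 exec sp-pod -- ls /tmp/mount   # foo is still there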

TestFunctional/parallel/SSHCmd (0.15s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1723: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 ssh "echo hello"
functional_test.go:1740: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.15s)

TestFunctional/parallel/CpCmd (0.27s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 ssh -n functional-537000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 cp functional-537000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd2638490680/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 ssh -n functional-537000 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.27s)
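
"minikube cp" copies in both directions, and the test verifies each copy by reading the node-side file back over ssh. A minimal sketch (assumes minikube on PATH; the host-side destination in the last line is illustrative):

    minikube -p functional-537000 cp testdata/cp-test.txt /home/docker/cp-test.txt
    minikube -p functional-537000 ssh -n functional-537000 "sudo cat /home/docker/cp-test.txt"
    minikube -p functional-537000 cp functional-537000:/home/docker/cp-test.txt ./cp-test.txt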

TestFunctional/parallel/FileSync (0.07s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1924: Checking for existence of /etc/test/nested/copy/1437/hosts within VM
functional_test.go:1926: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 ssh "sudo cat /etc/test/nested/copy/1437/hosts"
functional_test.go:1931: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.07s)
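
FileSync relies on minikube copying everything under $MINIKUBE_HOME/files into the VM at the matching absolute path when the machine starts; here the host file .../.minikube/files/etc/test/nested/copy/1437/hosts shows up in the VM as /etc/test/nested/copy/1437/hosts. A minimal check (assumes minikube on PATH):

    minikube -p functional-537000 ssh "sudo cat /etc/test/nested/copy/1437/hosts"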

TestFunctional/parallel/CertSync (0.42s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1967: Checking for existence of /etc/ssl/certs/1437.pem within VM
functional_test.go:1968: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 ssh "sudo cat /etc/ssl/certs/1437.pem"
functional_test.go:1967: Checking for existence of /usr/share/ca-certificates/1437.pem within VM
functional_test.go:1968: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 ssh "sudo cat /usr/share/ca-certificates/1437.pem"
functional_test.go:1967: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1968: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1994: Checking for existence of /etc/ssl/certs/14372.pem within VM
functional_test.go:1995: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 ssh "sudo cat /etc/ssl/certs/14372.pem"
functional_test.go:1994: Checking for existence of /usr/share/ca-certificates/14372.pem within VM
functional_test.go:1995: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 ssh "sudo cat /usr/share/ca-certificates/14372.pem"
functional_test.go:1994: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1995: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.42s)

TestFunctional/parallel/NodeLabels (0.05s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:217: (dbg) Run:  kubectl --context functional-537000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.06s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2022: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 ssh "sudo systemctl is-active crio"
functional_test.go:2022: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-537000 ssh "sudo systemctl is-active crio": exit status 1 (61.637333ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.06s)
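
With docker as the active runtime, crio must be inactive in the VM. "systemctl is-active" exits 3 for an inactive unit, which the ssh wrapper surfaces as the non-zero status seen above, so the whole check is a one-liner:

    minikube -p functional-537000 ssh "sudo systemctl is-active crio"   # prints "inactive", exit 3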

TestFunctional/parallel/License (0.64s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2283: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.64s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2251: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.24s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2265: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.24s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:259: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 image ls --format short --alsologtostderr
functional_test.go:264: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-537000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.27.2
registry.k8s.io/kube-proxy:v1.27.2
registry.k8s.io/kube-controller-manager:v1.27.2
registry.k8s.io/kube-apiserver:v1.27.2
registry.k8s.io/etcd:3.5.7-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-537000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-537000
functional_test.go:267: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-537000 image ls --format short --alsologtostderr:
I0520 08:09:40.311196    2324 out.go:296] Setting OutFile to fd 1 ...
I0520 08:09:40.311478    2324 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0520 08:09:40.311482    2324 out.go:309] Setting ErrFile to fd 2...
I0520 08:09:40.311484    2324 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0520 08:09:40.311556    2324 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16543-1012/.minikube/bin
I0520 08:09:40.311951    2324 config.go:182] Loaded profile config "functional-537000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0520 08:09:40.312017    2324 config.go:182] Loaded profile config "functional-537000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0520 08:09:40.312847    2324 ssh_runner.go:195] Run: systemctl --version
I0520 08:09:40.312856    2324 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/functional-537000/id_rsa Username:docker}
I0520 08:09:40.341763    2324 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:259: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 image ls --format table --alsologtostderr
functional_test.go:264: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-537000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox                 | latest            | 71a676dd070f4 | 1.41MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| docker.io/library/nginx                     | alpine            | 510900496a6c3 | 40.6MB |
| registry.k8s.io/coredns/coredns             | v1.10.1           | 97e04611ad434 | 51.4MB |
| registry.k8s.io/kube-controller-manager     | v1.27.2           | 2ee705380c3c5 | 107MB  |
| docker.io/library/nginx                     | latest            | 6405d9b26fafc | 135MB  |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| docker.io/localhost/my-image                | functional-537000 | d0c0c6284533b | 1.41MB |
| docker.io/library/minikube-local-cache-test | functional-537000 | 425a39919a80d | 30B    |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| gcr.io/google-containers/addon-resizer      | functional-537000 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/kube-scheduler              | v1.27.2           | 305d7ed1dae28 | 56.2MB |
| registry.k8s.io/etcd                        | 3.5.7-0           | 24bc64e911039 | 181MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/kube-apiserver              | v1.27.2           | 72c9df6be7f1b | 115MB  |
| registry.k8s.io/kube-proxy                  | v1.27.2           | 29921a0845422 | 66.5MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:267: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-537000 image ls --format table --alsologtostderr:
I0520 08:09:43.727986    2336 out.go:296] Setting OutFile to fd 1 ...
I0520 08:09:43.729218    2336 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0520 08:09:43.729221    2336 out.go:309] Setting ErrFile to fd 2...
I0520 08:09:43.729224    2336 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0520 08:09:43.729301    2336 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16543-1012/.minikube/bin
I0520 08:09:43.729710    2336 config.go:182] Loaded profile config "functional-537000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0520 08:09:43.729766    2336 config.go:182] Loaded profile config "functional-537000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0520 08:09:43.730555    2336 ssh_runner.go:195] Run: systemctl --version
I0520 08:09:43.730565    2336 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/functional-537000/id_rsa Username:docker}
I0520 08:09:43.760183    2336 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
2023/05/20 08:09:46 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.09s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:259: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 image ls --format json --alsologtostderr
functional_test.go:264: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-537000 image ls --format json --alsologtostderr:
[{"id":"29921a084542255eb81a1a660a603b1a24636d88b202f9010daae75fa32754c0","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.27.2"],"size":"66500000"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51400000"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-537000"],"size":"32900000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"d0c0c6284533bff5dae55de25fcfe99b88933ce777ededb75ab10fcb8248bc68","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-537000"],"size":"1410000"},{"id":"425a39919a80d34826e4ae3501d9d80012dabc
bb0d3a213a8d966bb53597e309","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-537000"],"size":"30"},{"id":"2ee705380c3c59a538b853cbe9ae9886ebbd0001a4cea4add5adeea48e5f48d4","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.27.2"],"size":"107000000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.7-0"],"size":"181000000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":[],"repoTags":["g
cr.io/k8s-minikube/busybox:latest"],"size":"1410000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"72c9df6be7f1b997e4a31b5cb9aa7262e5278905af97e6a69e341e3f0f9bbaae","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.27.2"],"size":"115000000"},{"id":"6405d9b26fafcc65baf6cbacd0211bd624632da10d18cae7dc42220a00eb7655","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"135000000"},{"id":"510900496a6c312a512d8f4ba0c69586e0fbd540955d65869b6010174362c313","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"40600000"},{"id":"305d7ed1dae2877c3a80d434c5fb9f1aac1aa3d2431c36130a3fcd1970e93840","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.27.2"],"size":"56200000"},{
"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"}]
functional_test.go:267: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-537000 image ls --format json --alsologtostderr:
I0520 08:09:43.646367    2334 out.go:296] Setting OutFile to fd 1 ...
I0520 08:09:43.646520    2334 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0520 08:09:43.646523    2334 out.go:309] Setting ErrFile to fd 2...
I0520 08:09:43.646526    2334 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0520 08:09:43.646595    2334 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16543-1012/.minikube/bin
I0520 08:09:43.646975    2334 config.go:182] Loaded profile config "functional-537000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0520 08:09:43.647031    2334 config.go:182] Loaded profile config "functional-537000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0520 08:09:43.647807    2334 ssh_runner.go:195] Run: systemctl --version
I0520 08:09:43.647816    2334 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/functional-537000/id_rsa Username:docker}
I0520 08:09:43.676775    2334 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:259: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 image ls --format yaml --alsologtostderr
functional_test.go:264: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-537000 image ls --format yaml --alsologtostderr:
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-537000
size: "32900000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 72c9df6be7f1b997e4a31b5cb9aa7262e5278905af97e6a69e341e3f0f9bbaae
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.27.2
size: "115000000"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 425a39919a80d34826e4ae3501d9d80012dabcbb0d3a213a8d966bb53597e309
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-537000
size: "30"
- id: 6405d9b26fafcc65baf6cbacd0211bd624632da10d18cae7dc42220a00eb7655
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "135000000"
- id: 510900496a6c312a512d8f4ba0c69586e0fbd540955d65869b6010174362c313
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "40600000"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51400000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 2ee705380c3c59a538b853cbe9ae9886ebbd0001a4cea4add5adeea48e5f48d4
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.27.2
size: "107000000"
- id: 305d7ed1dae2877c3a80d434c5fb9f1aac1aa3d2431c36130a3fcd1970e93840
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.27.2
size: "56200000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 29921a084542255eb81a1a660a603b1a24636d88b202f9010daae75fa32754c0
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.27.2
size: "66500000"
- id: 24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.7-0
size: "181000000"

                                                
                                                
functional_test.go:267: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-537000 image ls --format yaml --alsologtostderr:
I0520 08:09:40.391303    2326 out.go:296] Setting OutFile to fd 1 ...
I0520 08:09:40.391457    2326 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0520 08:09:40.391461    2326 out.go:309] Setting ErrFile to fd 2...
I0520 08:09:40.391464    2326 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0520 08:09:40.391535    2326 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16543-1012/.minikube/bin
I0520 08:09:40.391928    2326 config.go:182] Loaded profile config "functional-537000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0520 08:09:40.391996    2326 config.go:182] Loaded profile config "functional-537000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0520 08:09:40.392930    2326 ssh_runner.go:195] Run: systemctl --version
I0520 08:09:40.392939    2326 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/functional-537000/id_rsa Username:docker}
I0520 08:09:40.422433    2326 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:306: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 ssh pgrep buildkitd
functional_test.go:306: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-537000 ssh pgrep buildkitd: exit status 1 (64.215375ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:313: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 image build -t localhost/my-image:functional-537000 testdata/build --alsologtostderr
functional_test.go:313: (dbg) Done: out/minikube-darwin-arm64 -p functional-537000 image build -t localhost/my-image:functional-537000 testdata/build --alsologtostderr: (3.023294125s)
functional_test.go:318: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-537000 image build -t localhost/my-image:functional-537000 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
a01966dde7f8: Pulling fs layer
a01966dde7f8: Verifying Checksum
a01966dde7f8: Download complete
a01966dde7f8: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> 71a676dd070f
Step 2/3 : RUN true
---> Running in 5ca90bea5e73
Removing intermediate container 5ca90bea5e73
---> ed403d9c1615
Step 3/3 : ADD content.txt /
---> d0c0c6284533
Successfully built d0c0c6284533
Successfully tagged localhost/my-image:functional-537000
functional_test.go:321: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-537000 image build -t localhost/my-image:functional-537000 testdata/build --alsologtostderr:
I0520 08:09:40.537811    2330 out.go:296] Setting OutFile to fd 1 ...
I0520 08:09:40.540819    2330 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0520 08:09:40.540823    2330 out.go:309] Setting ErrFile to fd 2...
I0520 08:09:40.540826    2330 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0520 08:09:40.540912    2330 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16543-1012/.minikube/bin
I0520 08:09:40.541315    2330 config.go:182] Loaded profile config "functional-537000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0520 08:09:40.551436    2330 config.go:182] Loaded profile config "functional-537000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0520 08:09:40.552520    2330 ssh_runner.go:195] Run: systemctl --version
I0520 08:09:40.552529    2330 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16543-1012/.minikube/machines/functional-537000/id_rsa Username:docker}
I0520 08:09:40.582362    2330 build_images.go:151] Building image from path: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.2990576727.tar
I0520 08:09:40.582451    2330 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0520 08:09:40.589523    2330 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2990576727.tar
I0520 08:09:40.591741    2330 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2990576727.tar: stat -c "%s %y" /var/lib/minikube/build/build.2990576727.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2990576727.tar': No such file or directory
I0520 08:09:40.591771    2330 ssh_runner.go:362] scp /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.2990576727.tar --> /var/lib/minikube/build/build.2990576727.tar (3072 bytes)
I0520 08:09:40.604600    2330 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2990576727
I0520 08:09:40.609781    2330 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2990576727 -xf /var/lib/minikube/build/build.2990576727.tar
I0520 08:09:40.616891    2330 docker.go:336] Building image: /var/lib/minikube/build/build.2990576727
I0520 08:09:40.616969    2330 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-537000 /var/lib/minikube/build/build.2990576727
I0520 08:09:43.517605    2330 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-537000 /var/lib/minikube/build/build.2990576727: (2.900621167s)
I0520 08:09:43.517690    2330 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2990576727
I0520 08:09:43.520807    2330 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2990576727.tar
I0520 08:09:43.523503    2330 build_images.go:207] Built localhost/my-image:functional-537000 from /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.2990576727.tar
I0520 08:09:43.523521    2330 build_images.go:123] succeeded building to: functional-537000
I0520 08:09:43.523524    2330 build_images.go:124] failed building to: 
functional_test.go:446: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.17s)
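The three build steps in the log imply a Dockerfile of roughly this shape; this is reconstructed from the output above, since the report does not show the actual contents of testdata/build:

FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /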

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.42s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:340: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:340: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.375263584s)
functional_test.go:345: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-537000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.42s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (0.42s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:494: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-537000 docker-env) && out/minikube-darwin-arm64 status -p functional-537000"
functional_test.go:517: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-537000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.42s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2114: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2114: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2114: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (13.11s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-537000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1443: (dbg) Run:  kubectl --context functional-537000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1448: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-7b684b55f9-szz9k" [d7081496-1133-42e4-828f-e5f5f932e285] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-7b684b55f9-szz9k" [d7081496-1133-42e4-828f-e5f5f932e285] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1448: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 13.017070667s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (13.11s)
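The deployment is exposed with --type=NodePort, so Kubernetes allocates the port. A hedged one-liner to recover it with kubectl's jsonpath output (context name taken from the log):

kubectl --context functional-537000 get svc hello-node \
  -o jsonpath='{.spec.ports[0].nodePort}'
# the ServiceCmd/HTTPS test below shows the allocated port was 32067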

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:353: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 image load --daemon gcr.io/google-containers/addon-resizer:functional-537000 --alsologtostderr
functional_test.go:353: (dbg) Done: out/minikube-darwin-arm64 -p functional-537000 image load --daemon gcr.io/google-containers/addon-resizer:functional-537000 --alsologtostderr: (2.075241583s)
functional_test.go:446: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.15s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 image load --daemon gcr.io/google-containers/addon-resizer:functional-537000 --alsologtostderr
functional_test.go:363: (dbg) Done: out/minikube-darwin-arm64 -p functional-537000 image load --daemon gcr.io/google-containers/addon-resizer:functional-537000 --alsologtostderr: (1.4706185s)
functional_test.go:446: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.54s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:233: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:233: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.301336834s)
functional_test.go:238: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-537000
functional_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 image load --daemon gcr.io/google-containers/addon-resizer:functional-537000 --alsologtostderr
functional_test.go:243: (dbg) Done: out/minikube-darwin-arm64 -p functional-537000 image load --daemon gcr.io/google-containers/addon-resizer:functional-537000 --alsologtostderr: (1.763103875s)
functional_test.go:446: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:378: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 image save gcr.io/google-containers/addon-resizer:functional-537000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 image rm gcr.io/google-containers/addon-resizer:functional-537000 --alsologtostderr
functional_test.go:446: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.16s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:407: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:446: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.48s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.68s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:417: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-537000
functional_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 image save --daemon gcr.io/google-containers/addon-resizer:functional-537000 --alsologtostderr
functional_test.go:422: (dbg) Done: out/minikube-darwin-arm64 -p functional-537000 image save --daemon gcr.io/google-containers/addon-resizer:functional-537000 --alsologtostderr: (1.584281708s)
functional_test.go:427: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-537000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.68s)
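Taken together, the last four image tests exercise a full save/remove/load round-trip. Condensed into the equivalent manual flow (binary and paths exactly as in the log):

out/minikube-darwin-arm64 -p functional-537000 image save gcr.io/google-containers/addon-resizer:functional-537000 /Users/jenkins/workspace/addon-resizer-save.tar
out/minikube-darwin-arm64 -p functional-537000 image rm gcr.io/google-containers/addon-resizer:functional-537000
out/minikube-darwin-arm64 -p functional-537000 image load /Users/jenkins/workspace/addon-resizer-save.tar
out/minikube-darwin-arm64 -p functional-537000 image save --daemon gcr.io/google-containers/addon-resizer:functional-537000  # back into the host docker daemon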

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-537000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-537000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-537000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-537000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2106: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.22s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-537000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-537000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [ada28ac1-9c21-4d6e-9b0b-77b9cc250f3d] Pending
helpers_test.go:344: "nginx-svc" [ada28ac1-9c21-4d6e-9b0b-77b9cc250f3d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [ada28ac1-9c21-4d6e-9b0b-77b9cc250f3d] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.006325791s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.10s)
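testdata/testsvc.yaml is not reproduced in the report; judging from the IngressIP query further below, it creates a LoadBalancer Service that `minikube tunnel` then assigns an address to. A hedged sketch of that shape (the port and field values are assumptions, not the file's actual contents):

kubectl --context functional-537000 apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc        # matches the service queried in the IngressIP test
spec:
  type: LoadBalancer     # tunnel assigns external IPs to LoadBalancer services
  selector:
    run: nginx-svc       # matches the pod label this test waits on
  ports:
  - port: 80
EOF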

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.12s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1457: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.12s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1487: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 service list -o json
functional_test.go:1492: Took "90.522875ms" to run "out/minikube-darwin-arm64 -p functional-537000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.09s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1507: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 service --namespace=default --https --url hello-node
functional_test.go:1520: found endpoint: https://192.168.105.4:32067
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.10s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1538: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.10s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1557: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 service hello-node --url
functional_test.go:1563: found endpoint for hello-node: http://192.168.105.4:32067
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.10s)
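Both endpoints resolve to the same NodePort, so they can be smoke-tested straight from the host. A minimal check against the URL found above (curl assumed available; the echoserver image replies by echoing request details):

curl -s http://192.168.105.4:32067/ | head -n 5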

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-537000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.103.212.197 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-537000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.2s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1268: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1273: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.20s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.15s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1308: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1313: Took "121.396ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1322: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1327: Took "32.750625ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.15s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.16s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1359: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1364: Took "121.474875ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1372: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1377: Took "33.540917ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.16s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (6.06s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-537000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2695583697/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1684595357187872000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2695583697/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1684595357187872000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2695583697/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1684595357187872000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2695583697/001/test-1684595357187872000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-537000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (59.800625ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 May 20 15:09 created-by-test
-rw-r--r-- 1 docker docker 24 May 20 15:09 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 May 20 15:09 test-1684595357187872000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 ssh cat /mount-9p/test-1684595357187872000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-537000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [7013467d-4e9d-4eff-aa16-d4b40efaab49] Pending
helpers_test.go:344: "busybox-mount" [7013467d-4e9d-4eff-aa16-d4b40efaab49] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [7013467d-4e9d-4eff-aa16-d4b40efaab49] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [7013467d-4e9d-4eff-aa16-d4b40efaab49] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.007852792s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-537000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-537000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2695583697/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.06s)
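Condensing the any-port flow above: the host directory is mounted into the guest over 9p, verified with findmnt, exercised by the busybox-mount pod, and finally unmounted. The same lifecycle by hand, with $HOSTDIR standing in for the generated temp directory:

out/minikube-darwin-arm64 mount -p functional-537000 "$HOSTDIR":/mount-9p &   # keep running in the background
out/minikube-darwin-arm64 -p functional-537000 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-darwin-arm64 -p functional-537000 ssh -- ls -la /mount-9p
out/minikube-darwin-arm64 -p functional-537000 ssh "sudo umount -f /mount-9p"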

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.15s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-537000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port2383681785/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-537000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (64.205791ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-537000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port2383681785/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-537000 ssh "sudo umount -f /mount-9p": exit status 1 (67.12675ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-537000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-537000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port2383681785/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.15s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.19s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:188: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:188: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-537000
--- PASS: TestFunctional/delete_addon-resizer_images (0.19s)

                                                
                                    
TestFunctional/delete_my-image_image (0.04s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:196: (dbg) Run:  docker rmi -f localhost/my-image:functional-537000
--- PASS: TestFunctional/delete_my-image_image (0.04s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.04s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:204: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-537000
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

                                                
                                    
TestImageBuild/serial/Setup (30s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-027000 --driver=qemu2 
image_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -p image-027000 --driver=qemu2 : (29.998375708s)
--- PASS: TestImageBuild/serial/Setup (30.00s)

                                                
                                    
TestImageBuild/serial/NormalBuild (2.2s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-027000
image_test.go:78: (dbg) Done: out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-027000: (2.204448083s)
--- PASS: TestImageBuild/serial/NormalBuild (2.20s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.16s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-027000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.16s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.11s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-027000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.11s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (81.48s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-arm64 start -p ingress-addon-legacy-371000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=qemu2 
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-darwin-arm64 start -p ingress-addon-legacy-371000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=qemu2 : (1m21.482968708s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (81.48s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (14.82s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-371000 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-371000 addons enable ingress --alsologtostderr -v=5: (14.815458083s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (14.82s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.2s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-371000 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.20s)

                                                
                                    
TestJSONOutput/start/Command (72.69s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-723000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
E0520 08:13:39.745306    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/functional-537000/client.crt: no such file or directory
E0520 08:13:39.753686    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/functional-537000/client.crt: no such file or directory
E0520 08:13:39.765854    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/functional-537000/client.crt: no such file or directory
E0520 08:13:39.787981    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/functional-537000/client.crt: no such file or directory
E0520 08:13:39.829053    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/functional-537000/client.crt: no such file or directory
E0520 08:13:39.909562    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/functional-537000/client.crt: no such file or directory
E0520 08:13:40.071732    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/functional-537000/client.crt: no such file or directory
E0520 08:13:40.394011    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/functional-537000/client.crt: no such file or directory
E0520 08:13:41.036348    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/functional-537000/client.crt: no such file or directory
E0520 08:13:42.318647    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/functional-537000/client.crt: no such file or directory
E0520 08:13:44.881065    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/functional-537000/client.crt: no such file or directory
E0520 08:13:50.003311    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/functional-537000/client.crt: no such file or directory
E0520 08:14:00.245687    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/functional-537000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 start -p json-output-723000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : (1m12.692847042s)
--- PASS: TestJSONOutput/start/Command (72.69s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.32s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-723000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.32s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.34s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-723000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.34s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (12.16s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-723000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-723000 --output=json --user=testUser: (12.160832792s)
--- PASS: TestJSONOutput/stop/Command (12.16s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.36s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-703000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-703000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (86.998125ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"15be64eb-4373-4790-9bd8-986eaa491849","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-703000] minikube v1.30.1 on Darwin 13.3.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2ffba437-b7bd-4865-b0a8-aa804f3fa9ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16543"}}
	{"specversion":"1.0","id":"2d70279b-edba-495e-8aaa-f853939bd18f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig"}}
	{"specversion":"1.0","id":"bf87cd19-0198-4cee-ad94-48eddedd178d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"3c7a5a69-d799-4293-9954-5ec0f569c03a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"fd66530c-6d52-40b6-9211-48406cced757","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube"}}
	{"specversion":"1.0","id":"6508a463-d070-4c1b-a61d-4cfc671cf3d0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b1da0453-8039-46d6-b565-a083d6052f84","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-703000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-703000
--- PASS: TestErrorJSONOutput (0.36s)
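
The run above shows what `--output=json` actually emits: one CloudEvents-style JSON object per line, with error events carrying an exit code and message. As a minimal sketch (not minikube's own tooling), the stream can be consumed like this in Go; the struct fields mirror the JSON seen above, and the event-type string is the one observed in this run:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os/exec"
)

// cloudEvent mirrors the fields visible in the log output above.
type cloudEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "start",
		"-p", "json-output-error-703000", "--output=json", "--driver=fail")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	sc := bufio.NewScanner(stdout)
	for sc.Scan() {
		var ev cloudEvent
		if json.Unmarshal(sc.Bytes(), &ev) != nil {
			continue // ignore any non-JSON line
		}
		// "io.k8s.sigs.minikube.error" is the error event type seen above.
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s: %s\n", ev.Data["exitcode"], ev.Data["message"])
		}
	}
	_ = cmd.Wait() // a non-zero exit (56 in this run) is expected here
}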

TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestMinikubeProfile (61.55s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-605000 --driver=qemu2 
E0520 08:14:20.727555    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/functional-537000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-arm64 start -p first-605000 --driver=qemu2 : (28.910990667s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p second-607000 --driver=qemu2 
E0520 08:15:01.689209    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16543-1012/.minikube/profiles/functional-537000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-arm64 start -p second-607000 --driver=qemu2 : (31.855276583s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 profile first-605000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 profile second-607000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-607000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-607000
helpers_test.go:175: Cleaning up "first-605000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-605000
--- PASS: TestMinikubeProfile (61.55s)
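
The profile checks above rely on `profile list -ojson` being machine-readable. Here is a small sketch of consuming it; note that the top-level "valid"/"invalid" keys and the "Name" field are assumptions about the output shape, since this log never prints the JSON itself:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList is an assumed shape for `minikube profile list -ojson`;
// verify against real output before relying on it.
type profileList struct {
	Valid   []struct{ Name string } `json:"valid"`
	Invalid []struct{ Name string } `json:"invalid"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64",
		"profile", "list", "-ojson").Output()
	if err != nil {
		panic(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		fmt.Println("valid profile:", p.Name) // e.g. first-605000, second-607000
	}
}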

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-088000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-088000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (96.681416ms)
-- stdout --
	* [NoKubernetes-088000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16543
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16543-1012/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16543-1012/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
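
The assertion here is purely on the exit status: 14, which the stderr labels MK_USAGE. A sketch of how such a check can be written in a Go test, pulling the code out of the *exec.ExitError; the binary path and flags are copied from the log, and the test name is hypothetical:

package sketch

import (
	"errors"
	"os/exec"
	"testing"
)

func TestNoK8sWithVersionRejected(t *testing.T) {
	cmd := exec.Command("out/minikube-darwin-arm64", "start",
		"-p", "NoKubernetes-088000", "--no-kubernetes",
		"--kubernetes-version=1.20", "--driver=qemu2")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	if !errors.As(err, &exitErr) {
		t.Fatalf("expected a non-zero exit, got err=%v, output=%s", err, out)
	}
	// 14 is the usage-error status observed in the log above.
	if code := exitErr.ExitCode(); code != 14 {
		t.Fatalf("expected exit status 14, got %d", code)
	}
}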

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-088000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-088000 "sudo systemctl is-active --quiet service kubelet": exit status 89 (41.310834ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p NoKubernetes-088000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (0.16s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.16s)

TestNoKubernetes/serial/Stop (0.06s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-088000
--- PASS: TestNoKubernetes/serial/Stop (0.06s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-088000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-088000 "sudo systemctl is-active --quiet service kubelet": exit status 89 (40.876ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p NoKubernetes-088000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStartStop/group/old-k8s-version/serial/Stop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-464000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (0.06s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-464000 -n old-k8s-version-464000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-464000 -n old-k8s-version-464000: exit status 7 (28.747875ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-464000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.09s)
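
The pattern in this block (repeated for the other profiles below): `status --format={{.Host}}` exits non-zero once the host is stopped (status 7 here, which the harness notes "may be ok"), yet `addons enable` must still succeed against the stopped profile. A sketch of that flow under those assumptions; the profile name and flags are taken from the log:

package sketch

import (
	"errors"
	"os/exec"
	"testing"
)

func TestEnableAddonAfterStop(t *testing.T) {
	status := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", "old-k8s-version-464000")
	if err := status.Run(); err != nil {
		var exitErr *exec.ExitError
		// Tolerate the stopped-host exit status seen in the log.
		if !errors.As(err, &exitErr) || exitErr.ExitCode() != 7 {
			t.Fatalf("unexpected status error: %v", err)
		}
	}
	enable := exec.Command("out/minikube-darwin-arm64", "addons", "enable",
		"dashboard", "-p", "old-k8s-version-464000",
		"--images=MetricsScraper=registry.k8s.io/echoserver:1.4")
	if out, err := enable.CombinedOutput(); err != nil {
		t.Fatalf("addons enable failed: %v\n%s", err, out)
	}
}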

TestStartStop/group/no-preload/serial/Stop (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-641000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/no-preload/serial/Stop (0.07s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-641000 -n no-preload-641000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-641000 -n no-preload-641000: exit status 7 (27.490375ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-641000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/embed-certs/serial/Stop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-782000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/embed-certs/serial/Stop (0.06s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-782000 -n embed-certs-782000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-782000 -n embed-certs-782000: exit status 7 (29.367833ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-782000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-646000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-646000 -n default-k8s-diff-port-646000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-646000 -n default-k8s-diff-port-646000: exit status 7 (27.950583ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-646000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-700000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-700000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/newest-cni/serial/Stop (0.06s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-700000 -n newest-cni-700000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-700000 -n newest-cni-700000: exit status 7 (29.116584ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-700000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (20/242)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.27.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.27.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.27.2/cached-images (0.00s)

TestDownloadOnly/v1.27.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.27.2/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.27.2/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:210: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1782: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:545: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/VerifyCleanup (12.85s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-537000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1384780782/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-537000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1384780782/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-537000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1384780782/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-537000 ssh "findmnt -T" /mount1: exit status 1 (75.456916ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-537000 ssh "findmnt -T" /mount3: exit status 1 (59.101541ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-537000 ssh "findmnt -T" /mount3: exit status 1 (61.542709ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-537000 ssh "findmnt -T" /mount3: exit status 1 (59.4805ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-537000 ssh "findmnt -T" /mount3: exit status 1 (59.96975ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-537000 ssh "findmnt -T" /mount3: exit status 1 (58.943416ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-537000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-537000 ssh "findmnt -T" /mount3: exit status 1 (59.140458ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-537000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1384780782/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-537000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1384780782/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-537000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1384780782/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (12.85s)
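
This skip is the poll-then-give-up pattern: the test probes the guest with `findmnt -T <path>` over ssh until the mount appears, and skips when it never does, because macOS prompts before letting a non-code-signed binary listen on a non-localhost port, so the mount server cannot start on an unattended runner. A sketch of the polling side; waitForMount is an illustrative helper, not minikube's actual retry code:

package sketch

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForMount probes the guest until the mount is visible or the
// deadline passes; the ssh probe is the same command the test runs.
func waitForMount(profile, path string, deadline time.Duration) error {
	start := time.Now()
	for time.Since(start) < deadline {
		probe := exec.Command("out/minikube-darwin-arm64", "-p", profile,
			"ssh", "findmnt -T "+path)
		if probe.Run() == nil {
			return nil // mount is visible inside the guest
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("mount %s did not appear within %s", path, deadline)
}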

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:296: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.4s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:101: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-021000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-021000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-021000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-021000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-021000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-021000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-021000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-021000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-021000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-021000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-021000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

>>> host: /etc/hosts:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

>>> host: /etc/resolv.conf:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-021000

>>> host: crictl pods:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

>>> host: crictl containers:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

>>> k8s: describe netcat deployment:
error: context "cilium-021000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-021000" does not exist

>>> k8s: netcat logs:
error: context "cilium-021000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-021000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-021000" does not exist

>>> k8s: coredns logs:
error: context "cilium-021000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-021000" does not exist

>>> k8s: api server logs:
error: context "cilium-021000" does not exist

>>> host: /etc/cni:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

>>> host: ip a s:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

>>> host: ip r s:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

>>> host: iptables-save:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

>>> host: iptables table nat:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-021000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-021000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-021000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-021000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-021000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-021000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-021000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-021000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-021000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-021000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-021000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

>>> host: kubelet daemon config:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

>>> k8s: kubelet logs:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-021000

>>> host: docker daemon status:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

>>> host: docker daemon config:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

>>> host: docker system info:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

>>> host: cri-docker daemon status:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

>>> host: cri-docker daemon config:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

>>> host: cri-dockerd version:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

>>> host: containerd daemon status:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

>>> host: containerd daemon config:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

>>> host: containerd config dump:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

>>> host: crio daemon status:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

>>> host: crio daemon config:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

>>> host: /etc/crio:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

>>> host: crio config:
* Profile "cilium-021000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021000"

----------------------- debugLogs end: cilium-021000 [took: 2.12290975s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-021000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-021000
--- SKIP: TestNetworkPlugins/group/cilium (2.40s)

TestStartStop/group/disable-driver-mounts (0.27s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-155000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-155000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.27s)