Test Report: QEMU_macOS 17116

df10b09dbbeac24ae88706f418e89fa15ebc408d:2023-09-06:30896

Failed tests (87/244)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 14.25
7 TestDownloadOnly/v1.16.0/kubectl 0
20 TestOffline 9.96
22 TestAddons/Setup 44.98
23 TestCertOptions 10.02
24 TestCertExpiration 195.39
25 TestDockerFlags 10.12
26 TestForceSystemdFlag 11.57
27 TestForceSystemdEnv 9.9
72 TestFunctional/parallel/ServiceCmdConnect 30.35
110 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.17
139 TestImageBuild/serial/BuildWithBuildArg 1.09
148 TestIngressAddonLegacy/serial/ValidateIngressAddons 59.23
183 TestMountStart/serial/StartWithMountFirst 10.29
186 TestMultiNode/serial/FreshStart2Nodes 9.85
187 TestMultiNode/serial/DeployApp2Nodes 107.45
188 TestMultiNode/serial/PingHostFrom2Pods 0.08
189 TestMultiNode/serial/AddNode 0.07
190 TestMultiNode/serial/ProfileList 0.1
191 TestMultiNode/serial/CopyFile 0.06
192 TestMultiNode/serial/StopNode 0.13
193 TestMultiNode/serial/StartAfterStop 0.11
194 TestMultiNode/serial/RestartKeepsNodes 5.37
195 TestMultiNode/serial/DeleteNode 0.1
196 TestMultiNode/serial/StopMultiNode 0.15
197 TestMultiNode/serial/RestartMultiNode 5.25
198 TestMultiNode/serial/ValidateNameConflict 20.08
202 TestPreload 9.92
204 TestScheduledStopUnix 9.93
205 TestSkaffold 11.81
208 TestRunningBinaryUpgrade 126.43
210 TestKubernetesUpgrade 15.19
223 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.43
224 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.09
225 TestStoppedBinaryUpgrade/Setup 115.72
227 TestPause/serial/Start 9.79
237 TestNoKubernetes/serial/StartWithK8s 9.8
238 TestNoKubernetes/serial/StartWithStopK8s 5.32
239 TestNoKubernetes/serial/Start 5.3
243 TestNoKubernetes/serial/StartNoArgs 5.31
245 TestNetworkPlugins/group/auto/Start 9.69
246 TestNetworkPlugins/group/kindnet/Start 9.85
247 TestNetworkPlugins/group/calico/Start 9.78
248 TestNetworkPlugins/group/custom-flannel/Start 9.81
249 TestStoppedBinaryUpgrade/Upgrade 2.27
250 TestStoppedBinaryUpgrade/MinikubeLogs 0.14
251 TestNetworkPlugins/group/false/Start 11.47
252 TestNetworkPlugins/group/enable-default-cni/Start 9.73
253 TestNetworkPlugins/group/flannel/Start 9.97
254 TestNetworkPlugins/group/bridge/Start 9.91
255 TestNetworkPlugins/group/kubenet/Start 9.74
257 TestStartStop/group/old-k8s-version/serial/FirstStart 9.98
259 TestStartStop/group/no-preload/serial/FirstStart 9.91
260 TestStartStop/group/old-k8s-version/serial/DeployApp 0.1
261 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.12
264 TestStartStop/group/old-k8s-version/serial/SecondStart 6.96
265 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
266 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
267 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.08
268 TestStartStop/group/old-k8s-version/serial/Pause 0.1
270 TestStartStop/group/embed-certs/serial/FirstStart 11.57
271 TestStartStop/group/no-preload/serial/DeployApp 0.1
272 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
275 TestStartStop/group/no-preload/serial/SecondStart 7.04
276 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
277 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
278 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.08
279 TestStartStop/group/no-preload/serial/Pause 0.1
281 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 11.38
282 TestStartStop/group/embed-certs/serial/DeployApp 0.1
283 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
286 TestStartStop/group/embed-certs/serial/SecondStart 7.05
287 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
288 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
289 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
290 TestStartStop/group/embed-certs/serial/Pause 0.1
292 TestStartStop/group/newest-cni/serial/FirstStart 11.55
293 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
294 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
297 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 6.96
298 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
299 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.08
300 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
301 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
306 TestStartStop/group/newest-cni/serial/SecondStart 5.24
309 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
310 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.16.0/json-events (14.25s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-264000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-264000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=qemu2 : exit status 40 (14.246999666s)

-- stdout --
	{"specversion":"1.0","id":"8003db8b-d2d8-40c9-8ae2-74e39f26f8d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-264000] minikube v1.31.2 on Darwin 13.5.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3b774e42-4bb7-4f91-8ac2-d02d3cc7ef66","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17116"}}
	{"specversion":"1.0","id":"caeb81c8-84bd-48b2-8593-83c7f304a548","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig"}}
	{"specversion":"1.0","id":"cfc900c0-1ae4-49dc-be6d-1a2e22de48c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"d7884964-3526-4768-adf4-d62b4ad61df4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"95ee8fc5-9df4-4fa7-bb11-e6a10053d31f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube"}}
	{"specversion":"1.0","id":"a7949ab6-db5f-4ae8-9d2c-d6de1ce74c64","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"339e36bd-d5a5-44af-84e5-db7acc6586dc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"bb8f7a53-0a75-4389-8a4c-14d6527dc19e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"27781efb-a879-4f23-8f2e-5c5617f2f91f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"b5cd3eac-c0a8-4cb6-806c-002485d47da1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node download-only-264000 in cluster download-only-264000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"ec389fe9-beb2-419b-affc-84f1fecf6e24","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.16.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"a8759801-a11d-4b0b-80dc-d15205d98581","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/17116-1006/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x10803df68 0x10803df68 0x10803df68 0x10803df68 0x10803df68 0x10803df68 0x10803df68] Decompressors:map[bz2:0x14000653ce0 gz:0x14000653ce8 tar:0x14000653c90 tar.bz2:0x14000653ca0 tar.gz:0x14000653cb0 tar.xz:0x14000653cc0 tar.zst:0x14000653cd0 tbz2:0x14000653ca0 tgz:0x14000653cb0 txz:0x14000653cc0 tzst:0x14000653cd0 xz:0x14000653cf0 zip:0x14000653d00 zst:0x14000653cf8] Getters:map[file:0x140011b6690 http:0x140011d2140 https:0x140011d2190] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"8f47b007-4da5-4f5f-b372-7ea5f3f83ece","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0906 12:09:20.894122    1423 out.go:296] Setting OutFile to fd 1 ...
	I0906 12:09:20.894234    1423 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:09:20.894237    1423 out.go:309] Setting ErrFile to fd 2...
	I0906 12:09:20.894239    1423 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:09:20.894343    1423 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17116-1006/.minikube/bin
	W0906 12:09:20.894398    1423 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17116-1006/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17116-1006/.minikube/config/config.json: no such file or directory
	I0906 12:09:20.895557    1423 out.go:303] Setting JSON to true
	I0906 12:09:20.912826    1423 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":534,"bootTime":1694026826,"procs":409,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:09:20.912887    1423 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 12:09:20.918164    1423 out.go:97] [download-only-264000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 12:09:20.918305    1423 notify.go:220] Checking for updates...
	I0906 12:09:20.922281    1423 out.go:169] MINIKUBE_LOCATION=17116
	W0906 12:09:20.918493    1423 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball: no such file or directory
	I0906 12:09:20.930265    1423 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	I0906 12:09:20.934236    1423 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:09:20.937302    1423 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:09:20.940325    1423 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	W0906 12:09:20.946193    1423 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0906 12:09:20.946376    1423 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 12:09:20.952171    1423 out.go:97] Using the qemu2 driver based on user configuration
	I0906 12:09:20.952179    1423 start.go:298] selected driver: qemu2
	I0906 12:09:20.952184    1423 start.go:902] validating driver "qemu2" against <nil>
	I0906 12:09:20.952270    1423 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0906 12:09:20.956388    1423 out.go:169] Automatically selected the socket_vmnet network
	I0906 12:09:20.961991    1423 start_flags.go:384] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0906 12:09:20.962080    1423 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0906 12:09:20.962146    1423 cni.go:84] Creating CNI manager for ""
	I0906 12:09:20.962161    1423 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0906 12:09:20.962164    1423 start_flags.go:321] config:
	{Name:download-only-264000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-264000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 12:09:20.967687    1423 iso.go:125] acquiring lock: {Name:mk3f7665f27187d07892ba5c4ce3c49cda04e887 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:09:20.972585    1423 out.go:97] Downloading VM boot image ...
	I0906 12:09:20.972611    1423 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso
	I0906 12:09:26.736143    1423 out.go:97] Starting control plane node download-only-264000 in cluster download-only-264000
	I0906 12:09:26.736171    1423 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0906 12:09:26.793553    1423 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0906 12:09:26.793640    1423 cache.go:57] Caching tarball of preloaded images
	I0906 12:09:26.793805    1423 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0906 12:09:26.797892    1423 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0906 12:09:26.797899    1423 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0906 12:09:26.874709    1423 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0906 12:09:34.090349    1423 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0906 12:09:34.090482    1423 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0906 12:09:34.730353    1423 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0906 12:09:34.730535    1423 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/download-only-264000/config.json ...
	I0906 12:09:34.730554    1423 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/download-only-264000/config.json: {Name:mk223a71e1db329594e19bcb005209a7e85e101d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:09:34.730770    1423 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0906 12:09:34.730936    1423 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 -> /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/darwin/arm64/v1.16.0/kubectl
	I0906 12:09:35.081317    1423 out.go:169] 
	W0906 12:09:35.085222    1423 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/17116-1006/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x10803df68 0x10803df68 0x10803df68 0x10803df68 0x10803df68 0x10803df68 0x10803df68] Decompressors:map[bz2:0x14000653ce0 gz:0x14000653ce8 tar:0x14000653c90 tar.bz2:0x14000653ca0 tar.gz:0x14000653cb0 tar.xz:0x14000653cc0 tar.zst:0x14000653cd0 tbz2:0x14000653ca0 tgz:0x14000653cb0 txz:0x14000653cc0 tzst:0x14000653cd0 xz:0x14000653cf0 zip:0x14000653d00 zst:0x14000653cf8] Getters:map[file:0x140011b6690 http:0x140011d2140 https:0x140011d2190] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0906 12:09:35.085253    1423 out_reason.go:110] 
	W0906 12:09:35.091288    1423 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:09:35.094188    1423 out.go:169] 

** /stderr **
aaa_download_only_test.go:71: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-264000" "--force" "--alsologtostderr" "--kubernetes-version=v1.16.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.16.0/json-events (14.25s)
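The `-o=json` output above is a stream of CloudEvents-style JSON objects, one per line. A minimal sketch (not part of the test suite; `extract_errors` is a hypothetical helper) for pulling the error events out of such a stream:

```python
import json

def extract_errors(lines):
    """Collect (exitcode, message) pairs from minikube's JSON event stream."""
    errors = []
    for line in lines:
        line = line.strip()
        if not line.startswith("{"):
            continue  # skip interleaved non-JSON log noise
        event = json.loads(line)
        if event.get("type") == "io.k8s.sigs.minikube.error":
            data = event.get("data", {})
            errors.append((data.get("exitcode", ""), data.get("message", "")))
    return errors
```

Fed the stdout above, this would surface the exit-code-40 `INET_CACHE_KUBECTL` event (`bad response code: 404` while fetching the kubectl.sha1 checksum file).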

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:160: expected the file for binary exist at "/Users/jenkins/minikube-integration/17116-1006/.minikube/cache/darwin/arm64/v1.16.0/kubectl" but got error stat /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/darwin/arm64/v1.16.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.16.0/kubectl (0.00s)
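The path the test stats is built from MINIKUBE_HOME plus OS, architecture, and Kubernetes version segments, as visible in the failure message. A hypothetical helper mirroring that layout (the function name is mine, not minikube's):

```python
import os

def cached_kubectl_path(minikube_home, goos, goarch, k8s_version):
    """Mirror the cache layout seen in the log:
    <minikube_home>/cache/<os>/<arch>/<version>/kubectl"""
    return os.path.join(minikube_home, "cache", goos, goarch, k8s_version, "kubectl")

# The exact path this test expected, per the failure message above:
path = cached_kubectl_path(
    "/Users/jenkins/minikube-integration/17116-1006/.minikube",
    "darwin", "arm64", "v1.16.0")
print(path)
```

Because the previous download-only step exited before caching kubectl, nothing exists at that path and the `stat` fails.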

TestOffline (9.96s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-326000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-326000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.82159775s)

-- stdout --
	* [offline-docker-326000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17116
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node offline-docker-326000 in cluster offline-docker-326000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-326000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 12:23:00.369081    2900 out.go:296] Setting OutFile to fd 1 ...
	I0906 12:23:00.369207    2900 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:23:00.369210    2900 out.go:309] Setting ErrFile to fd 2...
	I0906 12:23:00.369213    2900 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:23:00.369328    2900 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17116-1006/.minikube/bin
	I0906 12:23:00.370590    2900 out.go:303] Setting JSON to false
	I0906 12:23:00.387260    2900 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1354,"bootTime":1694026826,"procs":401,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:23:00.387361    2900 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 12:23:00.391336    2900 out.go:177] * [offline-docker-326000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 12:23:00.399516    2900 out.go:177]   - MINIKUBE_LOCATION=17116
	I0906 12:23:00.403231    2900 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	I0906 12:23:00.399556    2900 notify.go:220] Checking for updates...
	I0906 12:23:00.409305    2900 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:23:00.412211    2900 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:23:00.415260    2900 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	I0906 12:23:00.418287    2900 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:23:00.421473    2900 config.go:182] Loaded profile config "multinode-122000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 12:23:00.421522    2900 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 12:23:00.425260    2900 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 12:23:00.431247    2900 start.go:298] selected driver: qemu2
	I0906 12:23:00.431256    2900 start.go:902] validating driver "qemu2" against <nil>
	I0906 12:23:00.431263    2900 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:23:00.433087    2900 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0906 12:23:00.436233    2900 out.go:177] * Automatically selected the socket_vmnet network
	I0906 12:23:00.439441    2900 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:23:00.439466    2900 cni.go:84] Creating CNI manager for ""
	I0906 12:23:00.439473    2900 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 12:23:00.439480    2900 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 12:23:00.439488    2900 start_flags.go:321] config:
	{Name:offline-docker-326000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:offline-docker-326000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 12:23:00.443633    2900 iso.go:125] acquiring lock: {Name:mk3f7665f27187d07892ba5c4ce3c49cda04e887 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:23:00.447310    2900 out.go:177] * Starting control plane node offline-docker-326000 in cluster offline-docker-326000
	I0906 12:23:00.455296    2900 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 12:23:00.455322    2900 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0906 12:23:00.455336    2900 cache.go:57] Caching tarball of preloaded images
	I0906 12:23:00.455398    2900 preload.go:174] Found /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 12:23:00.455403    2900 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0906 12:23:00.455471    2900 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/offline-docker-326000/config.json ...
	I0906 12:23:00.455483    2900 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/offline-docker-326000/config.json: {Name:mkd1c4b65eb3036e591197eaf690a058f20627ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:23:00.455712    2900 start.go:365] acquiring machines lock for offline-docker-326000: {Name:mkcd55fe6404821d1fd04819f324aec293fbf60b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:23:00.455741    2900 start.go:369] acquired machines lock for "offline-docker-326000" in 21.042µs
	I0906 12:23:00.455752    2900 start.go:93] Provisioning new machine with config: &{Name:offline-docker-326000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:offline-docker-326000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:23:00.455783    2900 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:23:00.460272    2900 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0906 12:23:00.474211    2900 start.go:159] libmachine.API.Create for "offline-docker-326000" (driver="qemu2")
	I0906 12:23:00.474235    2900 client.go:168] LocalClient.Create starting
	I0906 12:23:00.474300    2900 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem
	I0906 12:23:00.474324    2900 main.go:141] libmachine: Decoding PEM data...
	I0906 12:23:00.474336    2900 main.go:141] libmachine: Parsing certificate...
	I0906 12:23:00.474377    2900 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/cert.pem
	I0906 12:23:00.474395    2900 main.go:141] libmachine: Decoding PEM data...
	I0906 12:23:00.474402    2900 main.go:141] libmachine: Parsing certificate...
	I0906 12:23:00.474728    2900 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17116-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 12:23:00.593862    2900 main.go:141] libmachine: Creating SSH key...
	I0906 12:23:00.645246    2900 main.go:141] libmachine: Creating Disk image...
	I0906 12:23:00.645263    2900 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:23:00.645481    2900 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/offline-docker-326000/disk.qcow2.raw /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/offline-docker-326000/disk.qcow2
	I0906 12:23:00.654151    2900 main.go:141] libmachine: STDOUT: 
	I0906 12:23:00.654167    2900 main.go:141] libmachine: STDERR: 
	I0906 12:23:00.654228    2900 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/offline-docker-326000/disk.qcow2 +20000M
	I0906 12:23:00.662631    2900 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:23:00.662646    2900 main.go:141] libmachine: STDERR: 
	I0906 12:23:00.662663    2900 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/offline-docker-326000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/offline-docker-326000/disk.qcow2
	I0906 12:23:00.662669    2900 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:23:00.662702    2900 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/offline-docker-326000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/offline-docker-326000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/offline-docker-326000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:23:48:a6:06:ac -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/offline-docker-326000/disk.qcow2
	I0906 12:23:00.664499    2900 main.go:141] libmachine: STDOUT: 
	I0906 12:23:00.664513    2900 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:23:00.664530    2900 client.go:171] LocalClient.Create took 190.293083ms
	I0906 12:23:02.666297    2900 start.go:128] duration metric: createHost completed in 2.210564917s
	I0906 12:23:02.666325    2900 start.go:83] releasing machines lock for "offline-docker-326000", held for 2.210637375s
	W0906 12:23:02.666349    2900 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:23:02.670967    2900 out.go:177] * Deleting "offline-docker-326000" in qemu2 ...
	W0906 12:23:02.678568    2900 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:23:02.678579    2900 start.go:687] Will try again in 5 seconds ...
	I0906 12:23:07.680754    2900 start.go:365] acquiring machines lock for offline-docker-326000: {Name:mkcd55fe6404821d1fd04819f324aec293fbf60b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:23:07.681208    2900 start.go:369] acquired machines lock for "offline-docker-326000" in 361.25µs
	I0906 12:23:07.681333    2900 start.go:93] Provisioning new machine with config: &{Name:offline-docker-326000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:offline-docker-326000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:23:07.681590    2900 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:23:07.690176    2900 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0906 12:23:07.737815    2900 start.go:159] libmachine.API.Create for "offline-docker-326000" (driver="qemu2")
	I0906 12:23:07.737863    2900 client.go:168] LocalClient.Create starting
	I0906 12:23:07.738019    2900 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem
	I0906 12:23:07.738090    2900 main.go:141] libmachine: Decoding PEM data...
	I0906 12:23:07.738107    2900 main.go:141] libmachine: Parsing certificate...
	I0906 12:23:07.738219    2900 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/cert.pem
	I0906 12:23:07.738266    2900 main.go:141] libmachine: Decoding PEM data...
	I0906 12:23:07.738293    2900 main.go:141] libmachine: Parsing certificate...
	I0906 12:23:07.738863    2900 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17116-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 12:23:07.866826    2900 main.go:141] libmachine: Creating SSH key...
	I0906 12:23:08.106650    2900 main.go:141] libmachine: Creating Disk image...
	I0906 12:23:08.106661    2900 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:23:08.106833    2900 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/offline-docker-326000/disk.qcow2.raw /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/offline-docker-326000/disk.qcow2
	I0906 12:23:08.115966    2900 main.go:141] libmachine: STDOUT: 
	I0906 12:23:08.115983    2900 main.go:141] libmachine: STDERR: 
	I0906 12:23:08.116056    2900 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/offline-docker-326000/disk.qcow2 +20000M
	I0906 12:23:08.123203    2900 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:23:08.123215    2900 main.go:141] libmachine: STDERR: 
	I0906 12:23:08.123232    2900 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/offline-docker-326000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/offline-docker-326000/disk.qcow2
	I0906 12:23:08.123238    2900 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:23:08.123278    2900 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/offline-docker-326000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/offline-docker-326000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/offline-docker-326000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:df:e1:14:f1:f4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/offline-docker-326000/disk.qcow2
	I0906 12:23:08.124690    2900 main.go:141] libmachine: STDOUT: 
	I0906 12:23:08.124714    2900 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:23:08.124726    2900 client.go:171] LocalClient.Create took 386.867542ms
	I0906 12:23:10.126790    2900 start.go:128] duration metric: createHost completed in 2.445246167s
	I0906 12:23:10.126843    2900 start.go:83] releasing machines lock for "offline-docker-326000", held for 2.445682s
	W0906 12:23:10.127106    2900 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-326000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-326000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:23:10.135127    2900 out.go:177] 
	W0906 12:23:10.139053    2900 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:23:10.139078    2900 out.go:239] * 
	* 
	W0906 12:23:10.141114    2900 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:23:10.153095    2900 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-326000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:522: *** TestOffline FAILED at 2023-09-06 12:23:10.164914 -0700 PDT m=+829.429332418
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-326000 -n offline-docker-326000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-326000 -n offline-docker-326000: exit status 7 (39.404ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-326000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-326000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-326000
--- FAIL: TestOffline (9.96s)

TestAddons/Setup (44.98s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-195000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:88: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-195000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 80 (44.976044833s)

-- stdout --
	* [addons-195000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17116
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node addons-195000 in cluster addons-195000
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.28.1 on Docker 24.0.5 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying ingress addon...
	  - Using image docker.io/registry:2.8.1
	  - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.9
	  - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	
	  - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	  - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	

-- /stdout --
** stderr ** 
	I0906 12:09:51.906018    1499 out.go:296] Setting OutFile to fd 1 ...
	I0906 12:09:51.906128    1499 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:09:51.906130    1499 out.go:309] Setting ErrFile to fd 2...
	I0906 12:09:51.906133    1499 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:09:51.906239    1499 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17116-1006/.minikube/bin
	I0906 12:09:51.907286    1499 out.go:303] Setting JSON to false
	I0906 12:09:51.922195    1499 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":565,"bootTime":1694026826,"procs":409,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:09:51.922248    1499 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 12:09:51.927436    1499 out.go:177] * [addons-195000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 12:09:51.934401    1499 out.go:177]   - MINIKUBE_LOCATION=17116
	I0906 12:09:51.937473    1499 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	I0906 12:09:51.934453    1499 notify.go:220] Checking for updates...
	I0906 12:09:51.950844    1499 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:09:51.954417    1499 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:09:51.957499    1499 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	I0906 12:09:51.960459    1499 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:09:51.963487    1499 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 12:09:51.967371    1499 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 12:09:51.974340    1499 start.go:298] selected driver: qemu2
	I0906 12:09:51.974346    1499 start.go:902] validating driver "qemu2" against <nil>
	I0906 12:09:51.974351    1499 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:09:51.976363    1499 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0906 12:09:51.979456    1499 out.go:177] * Automatically selected the socket_vmnet network
	I0906 12:09:51.982461    1499 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:09:51.982482    1499 cni.go:84] Creating CNI manager for ""
	I0906 12:09:51.982489    1499 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 12:09:51.982494    1499 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 12:09:51.982501    1499 start_flags.go:321] config:
	{Name:addons-195000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-195000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 12:09:51.988318    1499 iso.go:125] acquiring lock: {Name:mk3f7665f27187d07892ba5c4ce3c49cda04e887 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:09:51.996559    1499 out.go:177] * Starting control plane node addons-195000 in cluster addons-195000
	I0906 12:09:52.000393    1499 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 12:09:52.000411    1499 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0906 12:09:52.000434    1499 cache.go:57] Caching tarball of preloaded images
	I0906 12:09:52.000494    1499 preload.go:174] Found /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 12:09:52.000499    1499 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0906 12:09:52.000705    1499 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/addons-195000/config.json ...
	I0906 12:09:52.000717    1499 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/addons-195000/config.json: {Name:mk114b972deb058c608c1d737d8dcbc48d6407fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:09:52.000928    1499 start.go:365] acquiring machines lock for addons-195000: {Name:mkcd55fe6404821d1fd04819f324aec293fbf60b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:09:52.001023    1499 start.go:369] acquired machines lock for "addons-195000" in 88.833µs
	I0906 12:09:52.001034    1499 start.go:93] Provisioning new machine with config: &{Name:addons-195000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-195000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:09:52.001064    1499 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:09:52.009413    1499 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0906 12:09:52.332046    1499 start.go:159] libmachine.API.Create for "addons-195000" (driver="qemu2")
	I0906 12:09:52.332082    1499 client.go:168] LocalClient.Create starting
	I0906 12:09:52.332229    1499 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem
	I0906 12:09:52.411484    1499 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/cert.pem
	I0906 12:09:52.507818    1499 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17116-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 12:09:53.027538    1499 main.go:141] libmachine: Creating SSH key...
	I0906 12:09:53.179878    1499 main.go:141] libmachine: Creating Disk image...
	I0906 12:09:53.179888    1499 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:09:53.181007    1499 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/addons-195000/disk.qcow2.raw /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/addons-195000/disk.qcow2
	I0906 12:09:53.215553    1499 main.go:141] libmachine: STDOUT: 
	I0906 12:09:53.215574    1499 main.go:141] libmachine: STDERR: 
	I0906 12:09:53.215636    1499 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/addons-195000/disk.qcow2 +20000M
	I0906 12:09:53.223129    1499 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:09:53.223173    1499 main.go:141] libmachine: STDERR: 
	I0906 12:09:53.223189    1499 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/addons-195000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/addons-195000/disk.qcow2
	I0906 12:09:53.223197    1499 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:09:53.223236    1499 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/addons-195000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/addons-195000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/addons-195000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:df:5f:68:42:49 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/addons-195000/disk.qcow2
	I0906 12:09:53.291035    1499 main.go:141] libmachine: STDOUT: 
	I0906 12:09:53.291098    1499 main.go:141] libmachine: STDERR: 
	I0906 12:09:53.291102    1499 main.go:141] libmachine: Attempt 0
	I0906 12:09:53.291122    1499 main.go:141] libmachine: Searching for 92:df:5f:68:42:49 in /var/db/dhcpd_leases ...
	I0906 12:09:55.293278    1499 main.go:141] libmachine: Attempt 1
	I0906 12:09:55.293366    1499 main.go:141] libmachine: Searching for 92:df:5f:68:42:49 in /var/db/dhcpd_leases ...
	I0906 12:09:57.295630    1499 main.go:141] libmachine: Attempt 2
	I0906 12:09:57.295655    1499 main.go:141] libmachine: Searching for 92:df:5f:68:42:49 in /var/db/dhcpd_leases ...
	I0906 12:09:59.297718    1499 main.go:141] libmachine: Attempt 3
	I0906 12:09:59.297730    1499 main.go:141] libmachine: Searching for 92:df:5f:68:42:49 in /var/db/dhcpd_leases ...
	I0906 12:10:01.299772    1499 main.go:141] libmachine: Attempt 4
	I0906 12:10:01.299790    1499 main.go:141] libmachine: Searching for 92:df:5f:68:42:49 in /var/db/dhcpd_leases ...
	I0906 12:10:03.301939    1499 main.go:141] libmachine: Attempt 5
	I0906 12:10:03.301980    1499 main.go:141] libmachine: Searching for 92:df:5f:68:42:49 in /var/db/dhcpd_leases ...
	I0906 12:10:05.302740    1499 main.go:141] libmachine: Attempt 6
	I0906 12:10:05.302771    1499 main.go:141] libmachine: Searching for 92:df:5f:68:42:49 in /var/db/dhcpd_leases ...
	I0906 12:10:05.302918    1499 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0906 12:10:05.302963    1499 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:92:df:5f:68:42:49 ID:1,92:df:5f:68:42:49 Lease:0x64fa200c}
	I0906 12:10:05.302970    1499 main.go:141] libmachine: Found match: 92:df:5f:68:42:49
	I0906 12:10:05.302980    1499 main.go:141] libmachine: IP: 192.168.105.2
	I0906 12:10:05.302995    1499 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
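The attempt loop above polls macOS's `/var/db/dhcpd_leases` until an entry with the VM's MAC address appears, then takes its IP. A minimal sketch of that lookup, assuming the `{ key=value ... }` block format implied by the dhcp entry printed in the log (the `findLeaseIP` name is hypothetical):

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// findLeaseIP scans dhcpd_leases content for the block whose hw_address
// matches mac and returns its ip_address. Assumes ip_address precedes
// hw_address within a block, as in the sample entry from the log.
func findLeaseIP(leases, mac string) (string, bool) {
	var ip string
	sc := bufio.NewScanner(strings.NewReader(leases))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case line == "{":
			ip = "" // new lease block: reset state
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// hw_address is "<type>,<mac>"; compare only the MAC part.
			if _, m, ok := strings.Cut(strings.TrimPrefix(line, "hw_address="), ","); ok && m == mac {
				return ip, ip != ""
			}
		}
	}
	return "", false
}

func main() {
	sample := "{\n\tname=minikube\n\tip_address=192.168.105.2\n\thw_address=1,92:df:5f:68:42:49\n\tlease=0x64fa200c\n}\n"
	ip, ok := findLeaseIP(sample, "92:df:5f:68:42:49")
	fmt.Println(ip, ok)
}
```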
	I0906 12:10:07.322290    1499 machine.go:88] provisioning docker machine ...
	I0906 12:10:07.322352    1499 buildroot.go:166] provisioning hostname "addons-195000"
	I0906 12:10:07.323797    1499 main.go:141] libmachine: Using SSH client type: native
	I0906 12:10:07.325525    1499 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c123b0] 0x104c14e10 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0906 12:10:07.325564    1499 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-195000 && echo "addons-195000" | sudo tee /etc/hostname
	I0906 12:10:07.408905    1499 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-195000
	
	I0906 12:10:07.409037    1499 main.go:141] libmachine: Using SSH client type: native
	I0906 12:10:07.409516    1499 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c123b0] 0x104c14e10 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0906 12:10:07.409531    1499 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-195000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-195000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-195000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 12:10:07.474185    1499 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 12:10:07.474203    1499 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17116-1006/.minikube CaCertPath:/Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17116-1006/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17116-1006/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17116-1006/.minikube}
	I0906 12:10:07.474222    1499 buildroot.go:174] setting up certificates
	I0906 12:10:07.474230    1499 provision.go:83] configureAuth start
	I0906 12:10:07.474236    1499 provision.go:138] copyHostCerts
	I0906 12:10:07.474400    1499 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17116-1006/.minikube/key.pem (1679 bytes)
	I0906 12:10:07.474735    1499 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17116-1006/.minikube/ca.pem (1078 bytes)
	I0906 12:10:07.474883    1499 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17116-1006/.minikube/cert.pem (1123 bytes)
	I0906 12:10:07.474992    1499 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca-key.pem org=jenkins.addons-195000 san=[192.168.105.2 192.168.105.2 localhost 127.0.0.1 minikube addons-195000]
	I0906 12:10:07.560698    1499 provision.go:172] copyRemoteCerts
	I0906 12:10:07.560754    1499 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 12:10:07.560762    1499 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/addons-195000/id_rsa Username:docker}
	I0906 12:10:07.587882    1499 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 12:10:07.594647    1499 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0906 12:10:07.601278    1499 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 12:10:07.608609    1499 provision.go:86] duration metric: configureAuth took 134.37575ms
	I0906 12:10:07.608617    1499 buildroot.go:189] setting minikube options for container-runtime
	I0906 12:10:07.608710    1499 config.go:182] Loaded profile config "addons-195000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 12:10:07.608747    1499 main.go:141] libmachine: Using SSH client type: native
	I0906 12:10:07.608961    1499 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c123b0] 0x104c14e10 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0906 12:10:07.608966    1499 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 12:10:07.658675    1499 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0906 12:10:07.658685    1499 buildroot.go:70] root file system type: tmpfs
	I0906 12:10:07.658742    1499 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 12:10:07.658790    1499 main.go:141] libmachine: Using SSH client type: native
	I0906 12:10:07.659028    1499 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c123b0] 0x104c14e10 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0906 12:10:07.659066    1499 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 12:10:07.713937    1499 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 12:10:07.713985    1499 main.go:141] libmachine: Using SSH client type: native
	I0906 12:10:07.714229    1499 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c123b0] 0x104c14e10 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0906 12:10:07.714240    1499 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 12:10:08.060644    1499 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0906 12:10:08.060655    1499 machine.go:91] provisioned docker machine in 738.343542ms
	I0906 12:10:08.060660    1499 client.go:171] LocalClient.Create took 15.728688917s
	I0906 12:10:08.060666    1499 start.go:167] duration metric: libmachine.API.Create for "addons-195000" took 15.728742583s
	I0906 12:10:08.060670    1499 start.go:300] post-start starting for "addons-195000" (driver="qemu2")
	I0906 12:10:08.060675    1499 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 12:10:08.060752    1499 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 12:10:08.060763    1499 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/addons-195000/id_rsa Username:docker}
	I0906 12:10:08.088622    1499 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 12:10:08.090013    1499 info.go:137] Remote host: Buildroot 2021.02.12
	I0906 12:10:08.090024    1499 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17116-1006/.minikube/addons for local assets ...
	I0906 12:10:08.090092    1499 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17116-1006/.minikube/files for local assets ...
	I0906 12:10:08.090119    1499 start.go:303] post-start completed in 29.445958ms
	I0906 12:10:08.090497    1499 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/addons-195000/config.json ...
	I0906 12:10:08.090644    1499 start.go:128] duration metric: createHost completed in 16.089694541s
	I0906 12:10:08.090664    1499 main.go:141] libmachine: Using SSH client type: native
	I0906 12:10:08.090900    1499 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c123b0] 0x104c14e10 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0906 12:10:08.090904    1499 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 12:10:08.139136    1499 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694027408.456001835
	
	I0906 12:10:08.139146    1499 fix.go:206] guest clock: 1694027408.456001835
	I0906 12:10:08.139151    1499 fix.go:219] Guest: 2023-09-06 12:10:08.456001835 -0700 PDT Remote: 2023-09-06 12:10:08.090646 -0700 PDT m=+16.202625543 (delta=365.355835ms)
	I0906 12:10:08.139163    1499 fix.go:190] guest clock delta is within tolerance: 365.355835ms
	I0906 12:10:08.139166    1499 start.go:83] releasing machines lock for "addons-195000", held for 16.138256042s
	I0906 12:10:08.139537    1499 ssh_runner.go:195] Run: cat /version.json
	I0906 12:10:08.139546    1499 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 12:10:08.139547    1499 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/addons-195000/id_rsa Username:docker}
	I0906 12:10:08.139587    1499 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/addons-195000/id_rsa Username:docker}
	I0906 12:10:08.166375    1499 ssh_runner.go:195] Run: systemctl --version
	I0906 12:10:08.210321    1499 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 12:10:08.212250    1499 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 12:10:08.212281    1499 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 12:10:08.217462    1499 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 12:10:08.217469    1499 start.go:466] detecting cgroup driver to use...
	I0906 12:10:08.217579    1499 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:10:08.223108    1499 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0906 12:10:08.226053    1499 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0906 12:10:08.229226    1499 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0906 12:10:08.229253    1499 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0906 12:10:08.232651    1499 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:10:08.236124    1499 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0906 12:10:08.239430    1499 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:10:08.242396    1499 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 12:10:08.245273    1499 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0906 12:10:08.248574    1499 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 12:10:08.251339    1499 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 12:10:08.253936    1499 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:10:08.333560    1499 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0906 12:10:08.342488    1499 start.go:466] detecting cgroup driver to use...
	I0906 12:10:08.342558    1499 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 12:10:08.348650    1499 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:10:08.353369    1499 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 12:10:08.359419    1499 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:10:08.363695    1499 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:10:08.368417    1499 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0906 12:10:08.408502    1499 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:10:08.413763    1499 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:10:08.419158    1499 ssh_runner.go:195] Run: which cri-dockerd
	I0906 12:10:08.420521    1499 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0906 12:10:08.423472    1499 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0906 12:10:08.428686    1499 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 12:10:08.516983    1499 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 12:10:08.596636    1499 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0906 12:10:08.596651    1499 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0906 12:10:08.602158    1499 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:10:08.685887    1499 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 12:10:09.850197    1499 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.164303958s)
	I0906 12:10:09.850242    1499 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0906 12:10:09.929871    1499 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0906 12:10:10.009865    1499 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0906 12:10:10.088672    1499 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:10:10.157843    1499 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0906 12:10:10.164971    1499 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:10:10.244458    1499 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0906 12:10:10.268075    1499 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0906 12:10:10.268168    1499 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0906 12:10:10.270404    1499 start.go:534] Will wait 60s for crictl version
	I0906 12:10:10.270443    1499 ssh_runner.go:195] Run: which crictl
	I0906 12:10:10.271967    1499 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 12:10:10.286944    1499 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.5
	RuntimeApiVersion:  v1alpha2
	I0906 12:10:10.287035    1499 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 12:10:10.296731    1499 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 12:10:10.311292    1499 out.go:204] * Preparing Kubernetes v1.28.1 on Docker 24.0.5 ...
	I0906 12:10:10.311438    1499 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0906 12:10:10.312812    1499 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
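The bash pipeline above upserts a hosts entry: it filters out any line already ending in `\thost.minikube.internal`, appends the fresh mapping, and copies the result back over /etc/hosts. The same transformation in Go, as a pure-string sketch with a hypothetical function name:

```go
package main

import (
	"fmt"
	"strings"
)

// upsertHost drops any /etc/hosts line that already maps name (matching the
// grep -v $'\t<name>$' filter) and appends "ip\tname", returning the new
// file contents.
func upsertHost(hosts, ip, name string) string {
	var out []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if line != "" && !strings.HasSuffix(line, "\t"+name) {
			out = append(out, line)
		}
	}
	out = append(out, ip+"\t"+name)
	return strings.Join(out, "\n") + "\n"
}

func main() {
	got := upsertHost("127.0.0.1\tlocalhost\n", "192.168.105.1", "host.minikube.internal")
	fmt.Print(got)
}
```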
	I0906 12:10:10.316575    1499 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 12:10:10.316619    1499 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 12:10:10.321910    1499 docker.go:636] Got preloaded images: 
	I0906 12:10:10.321920    1499 docker.go:642] registry.k8s.io/kube-apiserver:v1.28.1 wasn't preloaded
	I0906 12:10:10.321968    1499 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0906 12:10:10.324901    1499 ssh_runner.go:195] Run: which lz4
	I0906 12:10:10.326357    1499 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0906 12:10:10.327713    1499 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0906 12:10:10.327725    1499 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (356902558 bytes)
	I0906 12:10:11.676853    1499 docker.go:600] Took 1.350550 seconds to copy over tarball
	I0906 12:10:11.676907    1499 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0906 12:10:12.720405    1499 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.0434935s)
	I0906 12:10:12.720421    1499 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0906 12:10:12.736691    1499 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0906 12:10:12.740387    1499 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0906 12:10:12.745675    1499 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:10:12.814683    1499 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 12:10:15.010484    1499 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.195800541s)
	I0906 12:10:15.010568    1499 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 12:10:15.016157    1499 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.1
	registry.k8s.io/kube-scheduler:v1.28.1
	registry.k8s.io/kube-proxy:v1.28.1
	registry.k8s.io/kube-controller-manager:v1.28.1
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0906 12:10:15.016169    1499 cache_images.go:84] Images are preloaded, skipping loading
	I0906 12:10:15.016234    1499 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0906 12:10:15.023849    1499 cni.go:84] Creating CNI manager for ""
	I0906 12:10:15.023858    1499 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 12:10:15.023885    1499 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0906 12:10:15.023894    1499 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-195000 NodeName:addons-195000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 12:10:15.023970    1499 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-195000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
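The kubeadm config above pairs podSubnet 10.244.0.0/16 with serviceSubnet 10.96.0.0/12; these ranges must not overlap. A small sanity check for such a pair using the standard library (illustrative; kubeadm performs its own validation):

```go
package main

import (
	"fmt"
	"net"
)

// cidrsOverlap reports whether two CIDR ranges intersect. Two ranges
// overlap exactly when either network contains the other's base address.
func cidrsOverlap(a, b string) (bool, error) {
	_, na, err := net.ParseCIDR(a)
	if err != nil {
		return false, err
	}
	_, nb, err := net.ParseCIDR(b)
	if err != nil {
		return false, err
	}
	return na.Contains(nb.IP) || nb.Contains(na.IP), nil
}

func main() {
	ok, _ := cidrsOverlap("10.244.0.0/16", "10.96.0.0/12")
	fmt.Println(ok) // the pod and service subnets from the config above
}
```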
	
	I0906 12:10:15.024027    1499 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-195000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:addons-195000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0906 12:10:15.024093    1499 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0906 12:10:15.027379    1499 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 12:10:15.027406    1499 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 12:10:15.030008    1499 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0906 12:10:15.034826    1499 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 12:10:15.039422    1499 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0906 12:10:15.044540    1499 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0906 12:10:15.045981    1499 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 12:10:15.049505    1499 certs.go:56] Setting up /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/addons-195000 for IP: 192.168.105.2
	I0906 12:10:15.049515    1499 certs.go:190] acquiring lock for shared ca certs: {Name:mk2fda2e4681223badcda373e6897c8a04d70962 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:10:15.049666    1499 certs.go:204] generating minikubeCA CA: /Users/jenkins/minikube-integration/17116-1006/.minikube/ca.key
	I0906 12:10:15.094930    1499 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17116-1006/.minikube/ca.crt ...
	I0906 12:10:15.094938    1499 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17116-1006/.minikube/ca.crt: {Name:mk346c76ca64067d85779f2957e733095e591b8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:10:15.095128    1499 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17116-1006/.minikube/ca.key ...
	I0906 12:10:15.095132    1499 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17116-1006/.minikube/ca.key: {Name:mkd5536e35498613fa9d72a993cb163be4fa8874 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:10:15.095238    1499 certs.go:204] generating proxyClientCA CA: /Users/jenkins/minikube-integration/17116-1006/.minikube/proxy-client-ca.key
	I0906 12:10:15.226394    1499 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17116-1006/.minikube/proxy-client-ca.crt ...
	I0906 12:10:15.226401    1499 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17116-1006/.minikube/proxy-client-ca.crt: {Name:mk5b8bc52fc89bbd51cd9e757b98f11b13d7c285 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:10:15.226642    1499 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17116-1006/.minikube/proxy-client-ca.key ...
	I0906 12:10:15.226646    1499 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17116-1006/.minikube/proxy-client-ca.key: {Name:mkf64eee80395d4b71149b9be38400e59fa03ed8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:10:15.226776    1499 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/addons-195000/client.key
	I0906 12:10:15.226783    1499 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/addons-195000/client.crt with IP's: []
	I0906 12:10:15.382799    1499 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/addons-195000/client.crt ...
	I0906 12:10:15.382812    1499 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/addons-195000/client.crt: {Name:mk457ec185b05a6e1b702913114d7f00cb6c547a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:10:15.383037    1499 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/addons-195000/client.key ...
	I0906 12:10:15.383040    1499 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/addons-195000/client.key: {Name:mk11735281acdb377e1753e91be096d5d0496891 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:10:15.383137    1499 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/addons-195000/apiserver.key.96055969
	I0906 12:10:15.383149    1499 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/addons-195000/apiserver.crt.96055969 with IP's: [192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0906 12:10:15.487438    1499 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/addons-195000/apiserver.crt.96055969 ...
	I0906 12:10:15.487442    1499 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/addons-195000/apiserver.crt.96055969: {Name:mk44c4e5ccbce1dccfb5c39092f709bb46db2fa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:10:15.487594    1499 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/addons-195000/apiserver.key.96055969 ...
	I0906 12:10:15.487597    1499 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/addons-195000/apiserver.key.96055969: {Name:mk63c1ffe5f2c471d90c9fdd5f09cf5bb34c2eed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:10:15.487703    1499 certs.go:337] copying /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/addons-195000/apiserver.crt.96055969 -> /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/addons-195000/apiserver.crt
	I0906 12:10:15.487906    1499 certs.go:341] copying /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/addons-195000/apiserver.key.96055969 -> /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/addons-195000/apiserver.key
	I0906 12:10:15.488000    1499 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/addons-195000/proxy-client.key
	I0906 12:10:15.488015    1499 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/addons-195000/proxy-client.crt with IP's: []
	I0906 12:10:15.728249    1499 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/addons-195000/proxy-client.crt ...
	I0906 12:10:15.728257    1499 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/addons-195000/proxy-client.crt: {Name:mkc1b4cb9ccfce9a06a3ab2c88f57152823de02a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:10:15.728445    1499 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/addons-195000/proxy-client.key ...
	I0906 12:10:15.728448    1499 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/addons-195000/proxy-client.key: {Name:mkd91cea15b6b3105bf24ec267aafb8ab4dfee90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:10:15.728686    1499 certs.go:437] found cert: /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 12:10:15.728710    1499 certs.go:437] found cert: /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem (1078 bytes)
	I0906 12:10:15.728729    1499 certs.go:437] found cert: /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/Users/jenkins/minikube-integration/17116-1006/.minikube/certs/cert.pem (1123 bytes)
	I0906 12:10:15.728749    1499 certs.go:437] found cert: /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/Users/jenkins/minikube-integration/17116-1006/.minikube/certs/key.pem (1679 bytes)
	I0906 12:10:15.729018    1499 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/addons-195000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0906 12:10:15.736743    1499 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/addons-195000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0906 12:10:15.744011    1499 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/addons-195000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 12:10:15.750979    1499 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/addons-195000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0906 12:10:15.757508    1499 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17116-1006/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 12:10:15.764505    1499 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17116-1006/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0906 12:10:15.771681    1499 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17116-1006/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 12:10:15.778297    1499 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17116-1006/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0906 12:10:15.784930    1499 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17116-1006/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 12:10:15.791953    1499 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 12:10:15.797880    1499 ssh_runner.go:195] Run: openssl version
	I0906 12:10:15.799918    1499 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 12:10:15.802799    1499 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:10:15.804251    1499 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep  6 19:10 /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:10:15.804273    1499 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:10:15.806127    1499 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 12:10:15.809305    1499 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0906 12:10:15.810557    1499 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0906 12:10:15.810600    1499 kubeadm.go:404] StartCluster: {Name:addons-195000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-195000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 12:10:15.810669    1499 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 12:10:15.816019    1499 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 12:10:15.819090    1499 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 12:10:15.821818    1499 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 12:10:15.824848    1499 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 12:10:15.824861    1499 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0906 12:10:15.846384    1499 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0906 12:10:15.846416    1499 kubeadm.go:322] [preflight] Running pre-flight checks
	I0906 12:10:15.900813    1499 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 12:10:15.900890    1499 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 12:10:15.900934    1499 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0906 12:10:15.958648    1499 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 12:10:15.966849    1499 out.go:204]   - Generating certificates and keys ...
	I0906 12:10:15.966890    1499 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0906 12:10:15.966926    1499 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0906 12:10:16.098687    1499 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0906 12:10:16.275935    1499 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0906 12:10:16.444296    1499 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0906 12:10:16.566911    1499 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0906 12:10:16.607087    1499 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0906 12:10:16.607149    1499 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-195000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0906 12:10:16.706006    1499 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0906 12:10:16.706066    1499 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-195000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0906 12:10:16.910384    1499 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0906 12:10:16.971746    1499 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0906 12:10:17.044385    1499 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0906 12:10:17.044413    1499 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 12:10:17.199052    1499 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 12:10:17.409309    1499 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 12:10:17.469937    1499 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 12:10:17.532962    1499 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 12:10:17.533200    1499 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 12:10:17.534273    1499 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 12:10:17.538600    1499 out.go:204]   - Booting up control plane ...
	I0906 12:10:17.538707    1499 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 12:10:17.538758    1499 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 12:10:17.538798    1499 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 12:10:17.541582    1499 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 12:10:17.541631    1499 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 12:10:17.541649    1499 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0906 12:10:17.631031    1499 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0906 12:10:21.635888    1499 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.004897 seconds
	I0906 12:10:21.635950    1499 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0906 12:10:21.641546    1499 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0906 12:10:22.150191    1499 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0906 12:10:22.150297    1499 kubeadm.go:322] [mark-control-plane] Marking the node addons-195000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0906 12:10:22.656097    1499 kubeadm.go:322] [bootstrap-token] Using token: nzuq8z.nxpv5yp2bpc153hu
	I0906 12:10:22.659266    1499 out.go:204]   - Configuring RBAC rules ...
	I0906 12:10:22.659315    1499 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0906 12:10:22.660157    1499 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0906 12:10:22.665920    1499 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0906 12:10:22.667220    1499 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0906 12:10:22.668630    1499 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0906 12:10:22.669846    1499 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0906 12:10:22.674101    1499 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0906 12:10:22.860769    1499 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0906 12:10:23.063141    1499 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0906 12:10:23.063637    1499 kubeadm.go:322] 
	I0906 12:10:23.063673    1499 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0906 12:10:23.063677    1499 kubeadm.go:322] 
	I0906 12:10:23.063717    1499 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0906 12:10:23.063720    1499 kubeadm.go:322] 
	I0906 12:10:23.063732    1499 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0906 12:10:23.063771    1499 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0906 12:10:23.063797    1499 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0906 12:10:23.063800    1499 kubeadm.go:322] 
	I0906 12:10:23.063825    1499 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0906 12:10:23.063829    1499 kubeadm.go:322] 
	I0906 12:10:23.063856    1499 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0906 12:10:23.063861    1499 kubeadm.go:322] 
	I0906 12:10:23.063894    1499 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0906 12:10:23.063937    1499 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0906 12:10:23.063972    1499 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0906 12:10:23.063975    1499 kubeadm.go:322] 
	I0906 12:10:23.064018    1499 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0906 12:10:23.064069    1499 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0906 12:10:23.064074    1499 kubeadm.go:322] 
	I0906 12:10:23.064120    1499 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token nzuq8z.nxpv5yp2bpc153hu \
	I0906 12:10:23.064178    1499 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:17b7f6de3b10bbc20f0186efe5750d1dace064ea3ce551ed11c6083fb754ab3d \
	I0906 12:10:23.064191    1499 kubeadm.go:322] 	--control-plane 
	I0906 12:10:23.064193    1499 kubeadm.go:322] 
	I0906 12:10:23.064252    1499 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0906 12:10:23.064258    1499 kubeadm.go:322] 
	I0906 12:10:23.064294    1499 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token nzuq8z.nxpv5yp2bpc153hu \
	I0906 12:10:23.064352    1499 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:17b7f6de3b10bbc20f0186efe5750d1dace064ea3ce551ed11c6083fb754ab3d 
	I0906 12:10:23.064410    1499 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 12:10:23.064419    1499 cni.go:84] Creating CNI manager for ""
	I0906 12:10:23.064426    1499 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 12:10:23.072038    1499 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0906 12:10:23.075174    1499 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0906 12:10:23.078276    1499 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0906 12:10:23.082996    1499 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 12:10:23.083049    1499 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:10:23.083049    1499 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=5d7e0b4e9bf2e12b4952bdb0ed6ce3c8b866f138 minikube.k8s.io/name=addons-195000 minikube.k8s.io/updated_at=2023_09_06T12_10_23_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:10:23.086183    1499 ops.go:34] apiserver oom_adj: -16
	I0906 12:10:23.151094    1499 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:10:23.186067    1499 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:10:23.719681    1499 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:10:24.219706    1499 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:10:24.719671    1499 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:10:25.219682    1499 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:10:25.719643    1499 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:10:26.219617    1499 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:10:26.719642    1499 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:10:27.219620    1499 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:10:27.719617    1499 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:10:28.218118    1499 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:10:28.719605    1499 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:10:29.218080    1499 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:10:29.719615    1499 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:10:30.219630    1499 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:10:30.719621    1499 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:10:31.219605    1499 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:10:31.719627    1499 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:10:32.219568    1499 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:10:32.718617    1499 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:10:33.219536    1499 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:10:33.719524    1499 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:10:34.219555    1499 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:10:34.717768    1499 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:10:35.219556    1499 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:10:35.717670    1499 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:10:36.219540    1499 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:10:36.719545    1499 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:10:36.759083    1499 kubeadm.go:1081] duration metric: took 13.676165709s to wait for elevateKubeSystemPrivileges.
	I0906 12:10:36.759099    1499 kubeadm.go:406] StartCluster complete in 20.948653166s
	I0906 12:10:36.759108    1499 settings.go:142] acquiring lock: {Name:mkdab5683cd98d968361f82dee37aa31492af7cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:10:36.759273    1499 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17116-1006/kubeconfig
	I0906 12:10:36.759540    1499 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17116-1006/kubeconfig: {Name:mk69a76938a18011410dd32eccb7fee080824c64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:10:36.759746    1499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0906 12:10:36.759786    1499 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0906 12:10:36.759839    1499 addons.go:69] Setting volumesnapshots=true in profile "addons-195000"
	I0906 12:10:36.759846    1499 addons.go:231] Setting addon volumesnapshots=true in "addons-195000"
	I0906 12:10:36.759891    1499 config.go:182] Loaded profile config "addons-195000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 12:10:36.759895    1499 host.go:66] Checking if "addons-195000" exists ...
	I0906 12:10:36.759899    1499 addons.go:69] Setting metrics-server=true in profile "addons-195000"
	I0906 12:10:36.759915    1499 addons.go:231] Setting addon metrics-server=true in "addons-195000"
	I0906 12:10:36.759919    1499 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-195000"
	I0906 12:10:36.759932    1499 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-195000"
	I0906 12:10:36.759930    1499 addons.go:69] Setting ingress=true in profile "addons-195000"
	I0906 12:10:36.759946    1499 host.go:66] Checking if "addons-195000" exists ...
	I0906 12:10:36.759953    1499 addons.go:231] Setting addon ingress=true in "addons-195000"
	I0906 12:10:36.759960    1499 addons.go:69] Setting storage-provisioner=true in profile "addons-195000"
	I0906 12:10:36.759965    1499 addons.go:231] Setting addon storage-provisioner=true in "addons-195000"
	I0906 12:10:36.759958    1499 addons.go:69] Setting registry=true in profile "addons-195000"
	I0906 12:10:36.759977    1499 addons.go:69] Setting default-storageclass=true in profile "addons-195000"
	I0906 12:10:36.759983    1499 addons.go:69] Setting gcp-auth=true in profile "addons-195000"
	I0906 12:10:36.759964    1499 addons.go:69] Setting ingress-dns=true in profile "addons-195000"
	I0906 12:10:36.759992    1499 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-195000"
	I0906 12:10:36.759979    1499 host.go:66] Checking if "addons-195000" exists ...
	I0906 12:10:36.759989    1499 mustload.go:65] Loading cluster: addons-195000
	I0906 12:10:36.759946    1499 addons.go:69] Setting inspektor-gadget=true in profile "addons-195000"
	I0906 12:10:36.760042    1499 addons.go:231] Setting addon ingress-dns=true in "addons-195000"
	I0906 12:10:36.760049    1499 addons.go:231] Setting addon inspektor-gadget=true in "addons-195000"
	I0906 12:10:36.760082    1499 host.go:66] Checking if "addons-195000" exists ...
	I0906 12:10:36.760103    1499 host.go:66] Checking if "addons-195000" exists ...
	I0906 12:10:36.759956    1499 addons.go:69] Setting cloud-spanner=true in profile "addons-195000"
	I0906 12:10:36.760197    1499 addons.go:231] Setting addon cloud-spanner=true in "addons-195000"
	I0906 12:10:36.760213    1499 host.go:66] Checking if "addons-195000" exists ...
	W0906 12:10:36.760213    1499 host.go:54] host status for "addons-195000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/addons-195000/monitor: connect: connection refused
	W0906 12:10:36.760223    1499 addons.go:277] "addons-195000" is not running, setting metrics-server=true and skipping enablement (err=<nil>)
	I0906 12:10:36.760226    1499 addons.go:467] Verifying addon metrics-server=true in "addons-195000"
	I0906 12:10:36.760006    1499 host.go:66] Checking if "addons-195000" exists ...
	W0906 12:10:36.760252    1499 host.go:54] host status for "addons-195000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/addons-195000/monitor: connect: connection refused
	W0906 12:10:36.760265    1499 addons.go:277] "addons-195000" is not running, setting storage-provisioner=true and skipping enablement (err=<nil>)
	I0906 12:10:36.760307    1499 config.go:182] Loaded profile config "addons-195000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	W0906 12:10:36.760444    1499 host.go:54] host status for "addons-195000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/addons-195000/monitor: connect: connection refused
	W0906 12:10:36.760449    1499 addons.go:277] "addons-195000" is not running, setting ingress=true and skipping enablement (err=<nil>)
	I0906 12:10:36.760452    1499 addons.go:467] Verifying addon ingress=true in "addons-195000"
	I0906 12:10:36.759980    1499 addons.go:231] Setting addon registry=true in "addons-195000"
	I0906 12:10:36.763872    1499 out.go:177] * Verifying ingress addon...
	I0906 12:10:36.759954    1499 host.go:66] Checking if "addons-195000" exists ...
	I0906 12:10:36.760514    1499 host.go:66] Checking if "addons-195000" exists ...
	W0906 12:10:36.760505    1499 host.go:54] host status for "addons-195000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/addons-195000/monitor: connect: connection refused
	W0906 12:10:36.760543    1499 host.go:54] host status for "addons-195000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/addons-195000/monitor: connect: connection refused
	W0906 12:10:36.760569    1499 host.go:54] host status for "addons-195000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/addons-195000/monitor: connect: connection refused
	W0906 12:10:36.772920    1499 addons_storage_classes.go:55] "addons-195000" is not running, writing default-storageclass=true to disk and skipping enablement
	W0906 12:10:36.772988    1499 addons.go:277] "addons-195000" is not running, setting ingress-dns=true and skipping enablement (err=<nil>)
	W0906 12:10:36.772988    1499 addons.go:277] "addons-195000" is not running, setting inspektor-gadget=true and skipping enablement (err=<nil>)
	I0906 12:10:36.773355    1499 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0906 12:10:36.775895    1499 out.go:177] 
	I0906 12:10:36.779900    1499 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0906 12:10:36.782881    1499 addons.go:231] Setting addon default-storageclass=true in "addons-195000"
	I0906 12:10:36.782886    1499 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.9
	I0906 12:10:36.785313    1499 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-195000" context rescaled to 1 replicas
	I0906 12:10:36.785895    1499 out.go:177]   - Using image docker.io/registry:2.8.1
	I0906 12:10:36.799888    1499 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I0906 12:10:36.791979    1499 host.go:66] Checking if "addons-195000" exists ...
	I0906 12:10:36.791990    1499 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:10:36.792009    1499 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0906 12:10:36.797558    1499 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0906 12:10:36.799937    1499 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1003 bytes)
	I0906 12:10:36.802956    1499 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0906 12:10:36.803643    1499 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	W0906 12:10:36.806821    1499 out.go:239] X Exiting due to GUEST_STATUS: Unable to get machine status: state: connect: dial unix /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/addons-195000/monitor: connect: connection refused
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: connect: dial unix /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/addons-195000/monitor: connect: connection refused
	I0906 12:10:36.808803    1499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0906 12:10:36.814849    1499 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/addons-195000/id_rsa Username:docker}
	I0906 12:10:36.814862    1499 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0906 12:10:36.822699    1499 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0906 12:10:36.822701    1499 out.go:177] * Verifying Kubernetes components...
	I0906 12:10:36.822704    1499 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	W0906 12:10:36.822708    1499 out.go:239] * 
	* 
	I0906 12:10:36.826826    1499 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/addons-195000/id_rsa Username:docker}
	I0906 12:10:36.834823    1499 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0906 12:10:36.827146    1499 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/addons-195000/id_rsa Username:docker}
	I0906 12:10:36.842874    1499 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 12:10:36.842891    1499 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	W0906 12:10:36.827247    1499 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:10:36.842909    1499 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0906 12:10:36.846875    1499 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/addons-195000/id_rsa Username:docker}
	I0906 12:10:36.849867    1499 out.go:177] 

** /stderr **
addons_test.go:90: out/minikube-darwin-arm64 start -p addons-195000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 80
--- FAIL: TestAddons/Setup (44.98s)

TestCertOptions (10.02s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-186000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-186000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.747335666s)

-- stdout --
	* [cert-options-186000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17116
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node cert-options-186000 in cluster cert-options-186000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-186000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-186000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-186000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-186000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-186000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 89 (77.0995ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-186000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-186000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 89
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-186000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-186000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-186000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 89 (39.478875ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-186000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-186000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 89
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-186000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2023-09-06 12:23:40.212426 -0700 PDT m=+859.477654834
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-186000 -n cert-options-186000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-186000 -n cert-options-186000: exit status 7 (29.238667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-186000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-186000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-186000
--- FAIL: TestCertOptions (10.02s)
E0906 12:24:00.040303    1421 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/functional-779000/client.crt: no such file or directory
E0906 12:24:27.749140    1421 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/functional-779000/client.crt: no such file or directory
E0906 12:24:34.672706    1421 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/ingress-addon-legacy-192000/client.crt: no such file or directory

TestCertExpiration (195.39s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-096000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-096000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.991595125s)

-- stdout --
	* [cert-expiration-096000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17116
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node cert-expiration-096000 in cluster cert-expiration-096000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-096000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-096000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-096000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-096000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-096000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.226539125s)

-- stdout --
	* [cert-expiration-096000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17116
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node cert-expiration-096000 in cluster cert-expiration-096000
	* Restarting existing qemu2 VM for "cert-expiration-096000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-096000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-096000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-096000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-096000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17116
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node cert-expiration-096000 in cluster cert-expiration-096000
	* Restarting existing qemu2 VM for "cert-expiration-096000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-096000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-096000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2023-09-06 12:26:40.296039 -0700 PDT m=+1039.566126043
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-096000 -n cert-expiration-096000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-096000 -n cert-expiration-096000: exit status 7 (68.876167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-096000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-096000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-096000
--- FAIL: TestCertExpiration (195.39s)

TestDockerFlags (10.12s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-104000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-104000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.869893625s)

-- stdout --
	* [docker-flags-104000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17116
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node docker-flags-104000 in cluster docker-flags-104000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-104000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 12:23:20.227971    3096 out.go:296] Setting OutFile to fd 1 ...
	I0906 12:23:20.228107    3096 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:23:20.228109    3096 out.go:309] Setting ErrFile to fd 2...
	I0906 12:23:20.228112    3096 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:23:20.228215    3096 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17116-1006/.minikube/bin
	I0906 12:23:20.229220    3096 out.go:303] Setting JSON to false
	I0906 12:23:20.244251    3096 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1374,"bootTime":1694026826,"procs":402,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:23:20.244315    3096 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 12:23:20.249189    3096 out.go:177] * [docker-flags-104000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 12:23:20.257273    3096 out.go:177]   - MINIKUBE_LOCATION=17116
	I0906 12:23:20.261228    3096 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	I0906 12:23:20.257340    3096 notify.go:220] Checking for updates...
	I0906 12:23:20.267253    3096 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:23:20.270185    3096 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:23:20.273256    3096 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	I0906 12:23:20.276257    3096 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:23:20.277939    3096 config.go:182] Loaded profile config "force-systemd-flag-267000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 12:23:20.278005    3096 config.go:182] Loaded profile config "multinode-122000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 12:23:20.278059    3096 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 12:23:20.282227    3096 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 12:23:20.289069    3096 start.go:298] selected driver: qemu2
	I0906 12:23:20.289076    3096 start.go:902] validating driver "qemu2" against <nil>
	I0906 12:23:20.289082    3096 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:23:20.291047    3096 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0906 12:23:20.294264    3096 out.go:177] * Automatically selected the socket_vmnet network
	I0906 12:23:20.297342    3096 start_flags.go:917] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0906 12:23:20.297380    3096 cni.go:84] Creating CNI manager for ""
	I0906 12:23:20.297387    3096 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 12:23:20.297391    3096 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 12:23:20.297396    3096 start_flags.go:321] config:
	{Name:docker-flags-104000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:docker-flags-104000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain
:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/ru
n/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 12:23:20.301552    3096 iso.go:125] acquiring lock: {Name:mk3f7665f27187d07892ba5c4ce3c49cda04e887 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:23:20.308213    3096 out.go:177] * Starting control plane node docker-flags-104000 in cluster docker-flags-104000
	I0906 12:23:20.312298    3096 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 12:23:20.312325    3096 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0906 12:23:20.312350    3096 cache.go:57] Caching tarball of preloaded images
	I0906 12:23:20.312427    3096 preload.go:174] Found /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 12:23:20.312435    3096 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0906 12:23:20.312508    3096 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/docker-flags-104000/config.json ...
	I0906 12:23:20.312521    3096 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/docker-flags-104000/config.json: {Name:mkadac5cfc6e5b513182ec8833a3b693f7a124cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:23:20.312728    3096 start.go:365] acquiring machines lock for docker-flags-104000: {Name:mkcd55fe6404821d1fd04819f324aec293fbf60b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:23:20.312762    3096 start.go:369] acquired machines lock for "docker-flags-104000" in 25.667µs
	I0906 12:23:20.312773    3096 start.go:93] Provisioning new machine with config: &{Name:docker-flags-104000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root
SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:docker-flags-104000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion
:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:23:20.312809    3096 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:23:20.321234    3096 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0906 12:23:20.337404    3096 start.go:159] libmachine.API.Create for "docker-flags-104000" (driver="qemu2")
	I0906 12:23:20.337429    3096 client.go:168] LocalClient.Create starting
	I0906 12:23:20.337493    3096 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem
	I0906 12:23:20.337521    3096 main.go:141] libmachine: Decoding PEM data...
	I0906 12:23:20.337532    3096 main.go:141] libmachine: Parsing certificate...
	I0906 12:23:20.337574    3096 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/cert.pem
	I0906 12:23:20.337593    3096 main.go:141] libmachine: Decoding PEM data...
	I0906 12:23:20.337600    3096 main.go:141] libmachine: Parsing certificate...
	I0906 12:23:20.337908    3096 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17116-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 12:23:20.455078    3096 main.go:141] libmachine: Creating SSH key...
	I0906 12:23:20.530755    3096 main.go:141] libmachine: Creating Disk image...
	I0906 12:23:20.530762    3096 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:23:20.530919    3096 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/docker-flags-104000/disk.qcow2.raw /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/docker-flags-104000/disk.qcow2
	I0906 12:23:20.539512    3096 main.go:141] libmachine: STDOUT: 
	I0906 12:23:20.539526    3096 main.go:141] libmachine: STDERR: 
	I0906 12:23:20.539567    3096 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/docker-flags-104000/disk.qcow2 +20000M
	I0906 12:23:20.546721    3096 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:23:20.546735    3096 main.go:141] libmachine: STDERR: 
	I0906 12:23:20.546749    3096 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/docker-flags-104000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/docker-flags-104000/disk.qcow2
	I0906 12:23:20.546754    3096 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:23:20.546792    3096 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/docker-flags-104000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/docker-flags-104000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/docker-flags-104000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:2d:c2:e5:f6:c4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/docker-flags-104000/disk.qcow2
	I0906 12:23:20.548329    3096 main.go:141] libmachine: STDOUT: 
	I0906 12:23:20.548340    3096 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:23:20.548360    3096 client.go:171] LocalClient.Create took 210.929667ms
	I0906 12:23:22.550459    3096 start.go:128] duration metric: createHost completed in 2.237690875s
	I0906 12:23:22.550547    3096 start.go:83] releasing machines lock for "docker-flags-104000", held for 2.237811875s
	W0906 12:23:22.550617    3096 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:23:22.569953    3096 out.go:177] * Deleting "docker-flags-104000" in qemu2 ...
	W0906 12:23:22.585890    3096 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:23:22.585910    3096 start.go:687] Will try again in 5 seconds ...
	I0906 12:23:27.588133    3096 start.go:365] acquiring machines lock for docker-flags-104000: {Name:mkcd55fe6404821d1fd04819f324aec293fbf60b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:23:27.730772    3096 start.go:369] acquired machines lock for "docker-flags-104000" in 142.497625ms
	I0906 12:23:27.730962    3096 start.go:93] Provisioning new machine with config: &{Name:docker-flags-104000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root
SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:docker-flags-104000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion
:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:23:27.731227    3096 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:23:27.741961    3096 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0906 12:23:27.788348    3096 start.go:159] libmachine.API.Create for "docker-flags-104000" (driver="qemu2")
	I0906 12:23:27.788381    3096 client.go:168] LocalClient.Create starting
	I0906 12:23:27.788528    3096 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem
	I0906 12:23:27.788579    3096 main.go:141] libmachine: Decoding PEM data...
	I0906 12:23:27.788601    3096 main.go:141] libmachine: Parsing certificate...
	I0906 12:23:27.788681    3096 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/cert.pem
	I0906 12:23:27.788715    3096 main.go:141] libmachine: Decoding PEM data...
	I0906 12:23:27.788730    3096 main.go:141] libmachine: Parsing certificate...
	I0906 12:23:27.789248    3096 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17116-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 12:23:27.918337    3096 main.go:141] libmachine: Creating SSH key...
	I0906 12:23:28.009378    3096 main.go:141] libmachine: Creating Disk image...
	I0906 12:23:28.009388    3096 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:23:28.009530    3096 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/docker-flags-104000/disk.qcow2.raw /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/docker-flags-104000/disk.qcow2
	I0906 12:23:28.018088    3096 main.go:141] libmachine: STDOUT: 
	I0906 12:23:28.018101    3096 main.go:141] libmachine: STDERR: 
	I0906 12:23:28.018174    3096 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/docker-flags-104000/disk.qcow2 +20000M
	I0906 12:23:28.025335    3096 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:23:28.025346    3096 main.go:141] libmachine: STDERR: 
	I0906 12:23:28.025360    3096 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/docker-flags-104000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/docker-flags-104000/disk.qcow2
	I0906 12:23:28.025370    3096 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:23:28.025428    3096 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/docker-flags-104000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/docker-flags-104000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/docker-flags-104000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:96:d7:15:d1:1c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/docker-flags-104000/disk.qcow2
	I0906 12:23:28.026980    3096 main.go:141] libmachine: STDOUT: 
	I0906 12:23:28.026992    3096 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:23:28.027004    3096 client.go:171] LocalClient.Create took 238.624916ms
	I0906 12:23:30.029235    3096 start.go:128] duration metric: createHost completed in 2.297962041s
	I0906 12:23:30.029318    3096 start.go:83] releasing machines lock for "docker-flags-104000", held for 2.29857325s
	W0906 12:23:30.029757    3096 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-104000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-104000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:23:30.040389    3096 out.go:177] 
	W0906 12:23:30.045325    3096 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:23:30.045359    3096 out.go:239] * 
	* 
	W0906 12:23:30.047851    3096 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:23:30.057271    3096 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-104000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-104000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-104000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 89 (77.117375ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p docker-flags-104000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-104000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 89
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-104000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-104000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-104000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-104000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 89 (43.470084ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p docker-flags-104000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-104000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 89
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-104000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-104000\"\n"
panic.go:522: *** TestDockerFlags FAILED at 2023-09-06 12:23:30.19417 -0700 PDT m=+849.459128709
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-104000 -n docker-flags-104000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-104000 -n docker-flags-104000: exit status 7 (28.3455ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-104000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-104000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-104000
--- FAIL: TestDockerFlags (10.12s)
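Editor's note: every failure in this run reduces to the same root cause — nothing is accepting connections on `/var/run/socket_vmnet`, so `socket_vmnet_client` exits before QEMU ever boots the VM. The sketch below (a hypothetical temp path, not minikube code) reproduces that exact failure mode: a unix socket file that exists on disk but has no listener behind it makes `connect(2)` fail with "Connection refused", which is what the qemu2 driver surfaces above. This would happen, for instance, if the socket_vmnet daemon on the CI host crashed and left a stale socket file.

```python
import errno
import os
import socket
import tempfile

# A stale unix socket file (present on disk, but with no process listening
# behind it) is one common way to get ECONNREFUSED from connect(2) -- the
# same error string the qemu2 driver reports for /var/run/socket_vmnet.
path = os.path.join(tempfile.mkdtemp(), "socket_vmnet")

srv = socket.socket(socket.AF_UNIX)
srv.bind(path)   # creates the filesystem entry for the socket
srv.close()      # ...but leaves nothing accepting connections

cli = socket.socket(socket.AF_UNIX)
try:
    cli.connect(path)
    error = None
except OSError as e:
    error = e
finally:
    cli.close()

print(error.strerror)  # Connection refused
```

Note that if the socket file were missing entirely, `connect` would instead fail with ENOENT ("No such file or directory"); the "Connection refused" in the logs above implies the path exists but the daemon is not running or not listening.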

TestForceSystemdFlag (11.57s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-267000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-267000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (11.361890625s)

-- stdout --
	* [force-systemd-flag-267000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17116
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node force-systemd-flag-267000 in cluster force-systemd-flag-267000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-267000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 12:23:13.541340    3074 out.go:296] Setting OutFile to fd 1 ...
	I0906 12:23:13.541436    3074 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:23:13.541439    3074 out.go:309] Setting ErrFile to fd 2...
	I0906 12:23:13.541445    3074 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:23:13.541563    3074 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17116-1006/.minikube/bin
	I0906 12:23:13.542568    3074 out.go:303] Setting JSON to false
	I0906 12:23:13.557653    3074 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1367,"bootTime":1694026826,"procs":402,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:23:13.557725    3074 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 12:23:13.563288    3074 out.go:177] * [force-systemd-flag-267000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 12:23:13.574214    3074 out.go:177]   - MINIKUBE_LOCATION=17116
	I0906 12:23:13.578292    3074 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	I0906 12:23:13.574291    3074 notify.go:220] Checking for updates...
	I0906 12:23:13.584285    3074 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:23:13.587301    3074 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:23:13.590307    3074 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	I0906 12:23:13.593258    3074 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:23:13.596518    3074 config.go:182] Loaded profile config "force-systemd-env-834000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 12:23:13.596585    3074 config.go:182] Loaded profile config "multinode-122000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 12:23:13.596620    3074 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 12:23:13.600272    3074 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 12:23:13.607247    3074 start.go:298] selected driver: qemu2
	I0906 12:23:13.607252    3074 start.go:902] validating driver "qemu2" against <nil>
	I0906 12:23:13.607258    3074 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:23:13.609204    3074 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0906 12:23:13.613279    3074 out.go:177] * Automatically selected the socket_vmnet network
	I0906 12:23:13.616376    3074 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0906 12:23:13.616409    3074 cni.go:84] Creating CNI manager for ""
	I0906 12:23:13.616417    3074 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 12:23:13.616427    3074 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 12:23:13.616433    3074 start_flags.go:321] config:
	{Name:force-systemd-flag-267000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:force-systemd-flag-267000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 12:23:13.620629    3074 iso.go:125] acquiring lock: {Name:mk3f7665f27187d07892ba5c4ce3c49cda04e887 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:23:13.627140    3074 out.go:177] * Starting control plane node force-systemd-flag-267000 in cluster force-systemd-flag-267000
	I0906 12:23:13.631236    3074 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 12:23:13.631257    3074 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0906 12:23:13.631271    3074 cache.go:57] Caching tarball of preloaded images
	I0906 12:23:13.631339    3074 preload.go:174] Found /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 12:23:13.631344    3074 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0906 12:23:13.631396    3074 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/force-systemd-flag-267000/config.json ...
	I0906 12:23:13.631408    3074 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/force-systemd-flag-267000/config.json: {Name:mk4e14a60ce38e732b4903abddc747873e314d90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:23:13.631613    3074 start.go:365] acquiring machines lock for force-systemd-flag-267000: {Name:mkcd55fe6404821d1fd04819f324aec293fbf60b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:23:13.631646    3074 start.go:369] acquired machines lock for "force-systemd-flag-267000" in 23.25µs
	I0906 12:23:13.631658    3074 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-267000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:force-systemd-flag-267000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mount
UID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:23:13.631694    3074 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:23:13.639277    3074 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0906 12:23:13.654989    3074 start.go:159] libmachine.API.Create for "force-systemd-flag-267000" (driver="qemu2")
	I0906 12:23:13.655006    3074 client.go:168] LocalClient.Create starting
	I0906 12:23:13.655082    3074 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem
	I0906 12:23:13.655111    3074 main.go:141] libmachine: Decoding PEM data...
	I0906 12:23:13.655121    3074 main.go:141] libmachine: Parsing certificate...
	I0906 12:23:13.655154    3074 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/cert.pem
	I0906 12:23:13.655173    3074 main.go:141] libmachine: Decoding PEM data...
	I0906 12:23:13.655180    3074 main.go:141] libmachine: Parsing certificate...
	I0906 12:23:13.655489    3074 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17116-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 12:23:13.772682    3074 main.go:141] libmachine: Creating SSH key...
	I0906 12:23:13.847708    3074 main.go:141] libmachine: Creating Disk image...
	I0906 12:23:13.847714    3074 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:23:13.847856    3074 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/force-systemd-flag-267000/disk.qcow2.raw /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/force-systemd-flag-267000/disk.qcow2
	I0906 12:23:13.856372    3074 main.go:141] libmachine: STDOUT: 
	I0906 12:23:13.856387    3074 main.go:141] libmachine: STDERR: 
	I0906 12:23:13.856430    3074 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/force-systemd-flag-267000/disk.qcow2 +20000M
	I0906 12:23:13.863612    3074 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:23:13.863625    3074 main.go:141] libmachine: STDERR: 
	I0906 12:23:13.863646    3074 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/force-systemd-flag-267000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/force-systemd-flag-267000/disk.qcow2
	I0906 12:23:13.863651    3074 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:23:13.863685    3074 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/force-systemd-flag-267000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/force-systemd-flag-267000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/force-systemd-flag-267000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:0f:95:6a:8e:50 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/force-systemd-flag-267000/disk.qcow2
	I0906 12:23:13.865209    3074 main.go:141] libmachine: STDOUT: 
	I0906 12:23:13.865222    3074 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:23:13.865239    3074 client.go:171] LocalClient.Create took 210.233917ms
	I0906 12:23:15.867520    3074 start.go:128] duration metric: createHost completed in 2.235872084s
	I0906 12:23:15.867565    3074 start.go:83] releasing machines lock for "force-systemd-flag-267000", held for 2.235966416s
	W0906 12:23:15.867619    3074 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:23:15.877739    3074 out.go:177] * Deleting "force-systemd-flag-267000" in qemu2 ...
	W0906 12:23:15.899312    3074 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:23:15.899343    3074 start.go:687] Will try again in 5 seconds ...
	I0906 12:23:20.901466    3074 start.go:365] acquiring machines lock for force-systemd-flag-267000: {Name:mkcd55fe6404821d1fd04819f324aec293fbf60b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:23:22.550785    3074 start.go:369] acquired machines lock for "force-systemd-flag-267000" in 1.649187833s
	I0906 12:23:22.550929    3074 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-267000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:force-systemd-flag-267000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mount
UID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:23:22.551309    3074 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:23:22.560937    3074 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0906 12:23:22.607522    3074 start.go:159] libmachine.API.Create for "force-systemd-flag-267000" (driver="qemu2")
	I0906 12:23:22.607554    3074 client.go:168] LocalClient.Create starting
	I0906 12:23:22.607669    3074 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem
	I0906 12:23:22.607735    3074 main.go:141] libmachine: Decoding PEM data...
	I0906 12:23:22.607755    3074 main.go:141] libmachine: Parsing certificate...
	I0906 12:23:22.607819    3074 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/cert.pem
	I0906 12:23:22.607852    3074 main.go:141] libmachine: Decoding PEM data...
	I0906 12:23:22.607866    3074 main.go:141] libmachine: Parsing certificate...
	I0906 12:23:22.608344    3074 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17116-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 12:23:22.736917    3074 main.go:141] libmachine: Creating SSH key...
	I0906 12:23:22.816014    3074 main.go:141] libmachine: Creating Disk image...
	I0906 12:23:22.816023    3074 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:23:22.816166    3074 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/force-systemd-flag-267000/disk.qcow2.raw /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/force-systemd-flag-267000/disk.qcow2
	I0906 12:23:22.824820    3074 main.go:141] libmachine: STDOUT: 
	I0906 12:23:22.824833    3074 main.go:141] libmachine: STDERR: 
	I0906 12:23:22.824899    3074 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/force-systemd-flag-267000/disk.qcow2 +20000M
	I0906 12:23:22.832089    3074 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:23:22.832103    3074 main.go:141] libmachine: STDERR: 
	I0906 12:23:22.832120    3074 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/force-systemd-flag-267000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/force-systemd-flag-267000/disk.qcow2
	I0906 12:23:22.832126    3074 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:23:22.832158    3074 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/force-systemd-flag-267000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/force-systemd-flag-267000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/force-systemd-flag-267000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:82:89:16:8f:9c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/force-systemd-flag-267000/disk.qcow2
	I0906 12:23:22.833659    3074 main.go:141] libmachine: STDOUT: 
	I0906 12:23:22.833669    3074 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:23:22.833681    3074 client.go:171] LocalClient.Create took 226.123792ms
	I0906 12:23:24.836063    3074 start.go:128] duration metric: createHost completed in 2.284772959s
	I0906 12:23:24.836227    3074 start.go:83] releasing machines lock for "force-systemd-flag-267000", held for 2.285431875s
	W0906 12:23:24.836620    3074 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-267000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:23:24.847463    3074 out.go:177] 
	W0906 12:23:24.851179    3074 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:23:24.851202    3074 out.go:239] * 
	W0906 12:23:24.853798    3074 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:23:24.863167    3074 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-267000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-267000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-267000 ssh "docker info --format {{.CgroupDriver}}": exit status 89 (77.678166ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p force-systemd-flag-267000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-267000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 89
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2023-09-06 12:23:24.957101 -0700 PDT m=+844.221918626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-267000 -n force-systemd-flag-267000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-267000 -n force-systemd-flag-267000: exit status 7 (32.517959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-267000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-267000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-267000
--- FAIL: TestForceSystemdFlag (11.57s)
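Every failed start in this run, including the two attempts above, ends in `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. the qemu2 driver could not reach the socket_vmnet daemon on the host. A minimal sketch for verifying that on the test machine (the socket path is taken from the log; everything else is an assumption, not part of the test harness):

```shell
#!/bin/sh
# Probe the socket_vmnet control socket the way the qemu2 driver would.
SOCK=/var/run/socket_vmnet

# 1. Does the socket file exist at all?
if [ -S "$SOCK" ]; then
    echo "socket file exists: $SOCK"
else
    echo "socket file missing: $SOCK (socket_vmnet is probably not running)"
fi

# 2. Is anything actually listening on it? A stale socket file with no
#    daemon behind it still yields "Connection refused", matching this log.
if command -v nc >/dev/null 2>&1; then
    if nc -U -z "$SOCK" 2>/dev/null; then
        echo "socket_vmnet is accepting connections"
    else
        echo "no listener on $SOCK - restart the socket_vmnet service before re-running"
    fi
fi
```

If the second check fails, restarting the host's socket_vmnet service (however it is managed on that agent, e.g. launchd) would be the likely fix before retrying the suite.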

                                                
                                    
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-834000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
E0906 12:23:12.752369    1421 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/ingress-addon-legacy-192000/client.crt: no such file or directory
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-834000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.687223625s)

-- stdout --
	* [force-systemd-env-834000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17116
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node force-systemd-env-834000 in cluster force-systemd-env-834000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-834000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I0906 12:23:10.332349    3055 out.go:296] Setting OutFile to fd 1 ...
	I0906 12:23:10.332466    3055 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:23:10.332469    3055 out.go:309] Setting ErrFile to fd 2...
	I0906 12:23:10.332472    3055 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:23:10.332578    3055 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17116-1006/.minikube/bin
	I0906 12:23:10.333584    3055 out.go:303] Setting JSON to false
	I0906 12:23:10.349600    3055 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1364,"bootTime":1694026826,"procs":404,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:23:10.349672    3055 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 12:23:10.357111    3055 out.go:177] * [force-systemd-env-834000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 12:23:10.361108    3055 out.go:177]   - MINIKUBE_LOCATION=17116
	I0906 12:23:10.365045    3055 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	I0906 12:23:10.361204    3055 notify.go:220] Checking for updates...
	I0906 12:23:10.369043    3055 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:23:10.372073    3055 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:23:10.375063    3055 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	I0906 12:23:10.378089    3055 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0906 12:23:10.381465    3055 config.go:182] Loaded profile config "multinode-122000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 12:23:10.381517    3055 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 12:23:10.385079    3055 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 12:23:10.392040    3055 start.go:298] selected driver: qemu2
	I0906 12:23:10.392047    3055 start.go:902] validating driver "qemu2" against <nil>
	I0906 12:23:10.392053    3055 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:23:10.393935    3055 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0906 12:23:10.397116    3055 out.go:177] * Automatically selected the socket_vmnet network
	I0906 12:23:10.400097    3055 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0906 12:23:10.400115    3055 cni.go:84] Creating CNI manager for ""
	I0906 12:23:10.400121    3055 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 12:23:10.400124    3055 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 12:23:10.400130    3055 start_flags.go:321] config:
	{Name:force-systemd-env-834000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:force-systemd-env-834000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 12:23:10.403904    3055 iso.go:125] acquiring lock: {Name:mk3f7665f27187d07892ba5c4ce3c49cda04e887 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:23:10.410894    3055 out.go:177] * Starting control plane node force-systemd-env-834000 in cluster force-systemd-env-834000
	I0906 12:23:10.415049    3055 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 12:23:10.415065    3055 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0906 12:23:10.415076    3055 cache.go:57] Caching tarball of preloaded images
	I0906 12:23:10.415113    3055 preload.go:174] Found /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 12:23:10.415118    3055 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0906 12:23:10.415167    3055 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/force-systemd-env-834000/config.json ...
	I0906 12:23:10.415177    3055 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/force-systemd-env-834000/config.json: {Name:mk3ed49fb6a64e1f2d25111a2fe208778aa24c91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:23:10.415392    3055 start.go:365] acquiring machines lock for force-systemd-env-834000: {Name:mkcd55fe6404821d1fd04819f324aec293fbf60b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:23:10.415426    3055 start.go:369] acquired machines lock for "force-systemd-env-834000" in 27.5µs
	I0906 12:23:10.415437    3055 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-834000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:force-systemd-env-834000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:23:10.415463    3055 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:23:10.423016    3055 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0906 12:23:10.437018    3055 start.go:159] libmachine.API.Create for "force-systemd-env-834000" (driver="qemu2")
	I0906 12:23:10.437046    3055 client.go:168] LocalClient.Create starting
	I0906 12:23:10.437107    3055 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem
	I0906 12:23:10.437130    3055 main.go:141] libmachine: Decoding PEM data...
	I0906 12:23:10.437146    3055 main.go:141] libmachine: Parsing certificate...
	I0906 12:23:10.437175    3055 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/cert.pem
	I0906 12:23:10.437192    3055 main.go:141] libmachine: Decoding PEM data...
	I0906 12:23:10.437200    3055 main.go:141] libmachine: Parsing certificate...
	I0906 12:23:10.437684    3055 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17116-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 12:23:10.553047    3055 main.go:141] libmachine: Creating SSH key...
	I0906 12:23:10.643394    3055 main.go:141] libmachine: Creating Disk image...
	I0906 12:23:10.643403    3055 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:23:10.643559    3055 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/force-systemd-env-834000/disk.qcow2.raw /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/force-systemd-env-834000/disk.qcow2
	I0906 12:23:10.652461    3055 main.go:141] libmachine: STDOUT: 
	I0906 12:23:10.652476    3055 main.go:141] libmachine: STDERR: 
	I0906 12:23:10.652541    3055 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/force-systemd-env-834000/disk.qcow2 +20000M
	I0906 12:23:10.660145    3055 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:23:10.660160    3055 main.go:141] libmachine: STDERR: 
	I0906 12:23:10.660180    3055 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/force-systemd-env-834000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/force-systemd-env-834000/disk.qcow2
	I0906 12:23:10.660191    3055 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:23:10.660230    3055 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/force-systemd-env-834000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/force-systemd-env-834000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/force-systemd-env-834000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:59:38:9c:cb:bf -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/force-systemd-env-834000/disk.qcow2
	I0906 12:23:10.661834    3055 main.go:141] libmachine: STDOUT: 
	I0906 12:23:10.661846    3055 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:23:10.661864    3055 client.go:171] LocalClient.Create took 224.819458ms
	I0906 12:23:12.664027    3055 start.go:128] duration metric: createHost completed in 2.248595875s
	I0906 12:23:12.664096    3055 start.go:83] releasing machines lock for "force-systemd-env-834000", held for 2.248721541s
	W0906 12:23:12.664203    3055 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:23:12.671607    3055 out.go:177] * Deleting "force-systemd-env-834000" in qemu2 ...
	W0906 12:23:12.695569    3055 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:23:12.695599    3055 start.go:687] Will try again in 5 seconds ...
	I0906 12:23:17.697674    3055 start.go:365] acquiring machines lock for force-systemd-env-834000: {Name:mkcd55fe6404821d1fd04819f324aec293fbf60b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:23:17.698170    3055 start.go:369] acquired machines lock for "force-systemd-env-834000" in 384.958µs
	I0906 12:23:17.698334    3055 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-834000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:force-systemd-env-834000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:23:17.698708    3055 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:23:17.704383    3055 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0906 12:23:17.750680    3055 start.go:159] libmachine.API.Create for "force-systemd-env-834000" (driver="qemu2")
	I0906 12:23:17.750729    3055 client.go:168] LocalClient.Create starting
	I0906 12:23:17.750858    3055 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem
	I0906 12:23:17.750910    3055 main.go:141] libmachine: Decoding PEM data...
	I0906 12:23:17.750931    3055 main.go:141] libmachine: Parsing certificate...
	I0906 12:23:17.751017    3055 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/cert.pem
	I0906 12:23:17.751053    3055 main.go:141] libmachine: Decoding PEM data...
	I0906 12:23:17.751079    3055 main.go:141] libmachine: Parsing certificate...
	I0906 12:23:17.751645    3055 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17116-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 12:23:17.879163    3055 main.go:141] libmachine: Creating SSH key...
	I0906 12:23:17.930584    3055 main.go:141] libmachine: Creating Disk image...
	I0906 12:23:17.930589    3055 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:23:17.930730    3055 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/force-systemd-env-834000/disk.qcow2.raw /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/force-systemd-env-834000/disk.qcow2
	I0906 12:23:17.939223    3055 main.go:141] libmachine: STDOUT: 
	I0906 12:23:17.939235    3055 main.go:141] libmachine: STDERR: 
	I0906 12:23:17.939301    3055 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/force-systemd-env-834000/disk.qcow2 +20000M
	I0906 12:23:17.946398    3055 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:23:17.946410    3055 main.go:141] libmachine: STDERR: 
	I0906 12:23:17.946424    3055 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/force-systemd-env-834000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/force-systemd-env-834000/disk.qcow2
	I0906 12:23:17.946430    3055 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:23:17.946476    3055 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/force-systemd-env-834000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/force-systemd-env-834000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/force-systemd-env-834000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:a9:6e:75:cb:6e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/force-systemd-env-834000/disk.qcow2
	I0906 12:23:17.947979    3055 main.go:141] libmachine: STDOUT: 
	I0906 12:23:17.947990    3055 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:23:17.948001    3055 client.go:171] LocalClient.Create took 197.271583ms
	I0906 12:23:19.950112    3055 start.go:128] duration metric: createHost completed in 2.251436875s
	I0906 12:23:19.950201    3055 start.go:83] releasing machines lock for "force-systemd-env-834000", held for 2.252068833s
	W0906 12:23:19.950682    3055 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-834000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-834000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:23:19.960240    3055 out.go:177] 
	W0906 12:23:19.965303    3055 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:23:19.965342    3055 out.go:239] * 
	* 
	W0906 12:23:19.967980    3055 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:23:19.977262    3055 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-834000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-834000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-834000 ssh "docker info --format {{.CgroupDriver}}": exit status 89 (79.2015ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p force-systemd-env-834000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-834000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 89
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2023-09-06 12:23:20.072391 -0700 PDT m=+839.337076793
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-834000 -n force-systemd-env-834000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-834000 -n force-systemd-env-834000: exit status 7 (32.670459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-834000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-834000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-834000
--- FAIL: TestForceSystemdEnv (9.90s)
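
Both create attempts above die at the same step: `socket_vmnet_client` cannot reach the daemon socket at `/var/run/socket_vmnet`. A minimal precondition check is sketched below; `check_vmnet_sock` is a hypothetical helper name, not part of minikube or socket_vmnet.

```shell
# "Connection refused" on a unix socket usually means the socket file exists
# but nothing is accepting on it; a missing file would instead surface as
# "No such file or directory". This only checks the file-exists half.
check_vmnet_sock() {
  if [ -S "$1" ]; then
    echo "socket present"   # daemon may still be down if connects are refused
  else
    echo "socket missing"
  fi
}

check_vmnet_sock /var/run/socket_vmnet
```

When socket_vmnet is installed via Homebrew, the daemon is typically managed as a root service (`sudo brew services start socket_vmnet`); restarting it before re-running the qemu2 tests is the usual remedy for this class of failure.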

TestFunctional/parallel/ServiceCmdConnect (30.35s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-779000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-779000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-2rh64" [a9da556c-351b-4b07-9214-169bda88f9f9] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-2rh64" [a9da556c-351b-4b07-9214-169bda88f9f9] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.01961s
functional_test.go:1648: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.105.4:30391
functional_test.go:1660: error fetching http://192.168.105.4:30391: Get "http://192.168.105.4:30391": dial tcp 192.168.105.4:30391: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:30391: Get "http://192.168.105.4:30391": dial tcp 192.168.105.4:30391: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:30391: Get "http://192.168.105.4:30391": dial tcp 192.168.105.4:30391: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:30391: Get "http://192.168.105.4:30391": dial tcp 192.168.105.4:30391: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:30391: Get "http://192.168.105.4:30391": dial tcp 192.168.105.4:30391: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:30391: Get "http://192.168.105.4:30391": dial tcp 192.168.105.4:30391: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:30391: Get "http://192.168.105.4:30391": dial tcp 192.168.105.4:30391: connect: connection refused
functional_test.go:1680: failed to fetch http://192.168.105.4:30391: Get "http://192.168.105.4:30391": dial tcp 192.168.105.4:30391: connect: connection refused
functional_test.go:1597: service test failed - dumping debug information
functional_test.go:1598: -----------------------service failure post-mortem--------------------------------
functional_test.go:1601: (dbg) Run:  kubectl --context functional-779000 describe po hello-node-connect
functional_test.go:1605: hello-node pod describe:
Name:             hello-node-connect-7799dfb7c6-2rh64
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-779000/192.168.105.4
Start Time:       Wed, 06 Sep 2023 12:14:20 -0700
Labels:           app=hello-node-connect
pod-template-hash=7799dfb7c6
Annotations:      <none>
Status:           Running
IP:               10.244.0.10
IPs:
IP:           10.244.0.10
Controlled By:  ReplicaSet/hello-node-connect-7799dfb7c6
Containers:
echoserver-arm:
Container ID:   docker://1ce0e6c38ab01eaa6ab7cda958e2e491faefa82aae24e0cf29a5352ce041d6da
Image:          registry.k8s.io/echoserver-arm:1.8
Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
Port:           <none>
Host Port:      <none>
State:          Terminated
Reason:       Error
Exit Code:    1
Started:      Wed, 06 Sep 2023 12:14:37 -0700
Finished:     Wed, 06 Sep 2023 12:14:37 -0700
Last State:     Terminated
Reason:       Error
Exit Code:    1
Started:      Wed, 06 Sep 2023 12:14:21 -0700
Finished:     Wed, 06 Sep 2023 12:14:21 -0700
Ready:          False
Restart Count:  2
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cm8jb (ro)
Conditions:
Type              Status
Initialized       True 
Ready             False 
ContainersReady   False 
PodScheduled      True 
Volumes:
kube-api-access-cm8jb:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                From               Message
----     ------     ----               ----               -------
Normal   Scheduled  29s                default-scheduler  Successfully assigned default/hello-node-connect-7799dfb7c6-2rh64 to functional-779000
Normal   Pulled     13s (x3 over 29s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
Normal   Created    13s (x3 over 29s)  kubelet            Created container echoserver-arm
Normal   Started    13s (x3 over 29s)  kubelet            Started container echoserver-arm
Warning  BackOff    13s (x2 over 28s)  kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-7799dfb7c6-2rh64_default(a9da556c-351b-4b07-9214-169bda88f9f9)

functional_test.go:1607: (dbg) Run:  kubectl --context functional-779000 logs -l app=hello-node-connect
functional_test.go:1611: hello-node logs:
exec /usr/sbin/nginx: exec format error
functional_test.go:1613: (dbg) Run:  kubectl --context functional-779000 describe svc hello-node-connect
functional_test.go:1617: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.105.166.99
IPs:                      10.105.166.99
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30391/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
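
The pod log above (`exec /usr/sbin/nginx: exec format error`) together with the empty `Endpoints:` field suggests an architecture mismatch: the container's nginx binary targets a different CPU than the node, so every restart crashes and the service never gains a ready endpoint. A sketch of how one might compare the two sides (the `docker image inspect` line is illustrative and needs a running docker daemon, so only the node-side command executes here):

```shell
# Image side (illustrative; requires a docker daemon on the node):
#   docker image inspect registry.k8s.io/echoserver-arm:1.8 \
#     --format '{{.Os}}/{{.Architecture}}'
# Node side: report the host CPU architecture for comparison.
uname -m
```

If the two values disagree (e.g. an amd64-only image on an arm64 qemu2 node), the fix is a multi-arch or architecture-matched image rather than anything in the Service configuration.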
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-779000 -n functional-779000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                                         Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| addons  | functional-779000 addons list                                                                                        | functional-779000 | jenkins | v1.31.2 | 06 Sep 23 12:14 PDT | 06 Sep 23 12:14 PDT |
	|         | -o json                                                                                                              |                   |         |         |                     |                     |
	| service | functional-779000 service                                                                                            | functional-779000 | jenkins | v1.31.2 | 06 Sep 23 12:14 PDT | 06 Sep 23 12:14 PDT |
	|         | hello-node-connect --url                                                                                             |                   |         |         |                     |                     |
	| mount   | -p functional-779000                                                                                                 | functional-779000 | jenkins | v1.31.2 | 06 Sep 23 12:14 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2835954442/001:/mount-9p      |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-779000 ssh findmnt                                                                                        | functional-779000 | jenkins | v1.31.2 | 06 Sep 23 12:14 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-779000 ssh findmnt                                                                                        | functional-779000 | jenkins | v1.31.2 | 06 Sep 23 12:14 PDT | 06 Sep 23 12:14 PDT |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-779000 ssh -- ls                                                                                          | functional-779000 | jenkins | v1.31.2 | 06 Sep 23 12:14 PDT | 06 Sep 23 12:14 PDT |
	|         | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh     | functional-779000 ssh cat                                                                                            | functional-779000 | jenkins | v1.31.2 | 06 Sep 23 12:14 PDT | 06 Sep 23 12:14 PDT |
	|         | /mount-9p/test-1694027678816282000                                                                                   |                   |         |         |                     |                     |
	| ssh     | functional-779000 ssh stat                                                                                           | functional-779000 | jenkins | v1.31.2 | 06 Sep 23 12:14 PDT | 06 Sep 23 12:14 PDT |
	|         | /mount-9p/created-by-test                                                                                            |                   |         |         |                     |                     |
	| ssh     | functional-779000 ssh stat                                                                                           | functional-779000 | jenkins | v1.31.2 | 06 Sep 23 12:14 PDT | 06 Sep 23 12:14 PDT |
	|         | /mount-9p/created-by-pod                                                                                             |                   |         |         |                     |                     |
	| ssh     | functional-779000 ssh sudo                                                                                           | functional-779000 | jenkins | v1.31.2 | 06 Sep 23 12:14 PDT | 06 Sep 23 12:14 PDT |
	|         | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| mount   | -p functional-779000                                                                                                 | functional-779000 | jenkins | v1.31.2 | 06 Sep 23 12:14 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port2176480214/001:/mount-9p |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1 --port 46464                                                                                  |                   |         |         |                     |                     |
	| ssh     | functional-779000 ssh findmnt                                                                                        | functional-779000 | jenkins | v1.31.2 | 06 Sep 23 12:14 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-779000 ssh findmnt                                                                                        | functional-779000 | jenkins | v1.31.2 | 06 Sep 23 12:14 PDT | 06 Sep 23 12:14 PDT |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-779000 ssh -- ls                                                                                          | functional-779000 | jenkins | v1.31.2 | 06 Sep 23 12:14 PDT | 06 Sep 23 12:14 PDT |
	|         | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh     | functional-779000 ssh sudo                                                                                           | functional-779000 | jenkins | v1.31.2 | 06 Sep 23 12:14 PDT |                     |
	|         | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| mount   | -p functional-779000                                                                                                 | functional-779000 | jenkins | v1.31.2 | 06 Sep 23 12:14 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2094269783/001:/mount1   |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount   | -p functional-779000                                                                                                 | functional-779000 | jenkins | v1.31.2 | 06 Sep 23 12:14 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2094269783/001:/mount3   |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount   | -p functional-779000                                                                                                 | functional-779000 | jenkins | v1.31.2 | 06 Sep 23 12:14 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2094269783/001:/mount2   |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-779000 ssh findmnt                                                                                        | functional-779000 | jenkins | v1.31.2 | 06 Sep 23 12:14 PDT |                     |
	|         | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh     | functional-779000 ssh findmnt                                                                                        | functional-779000 | jenkins | v1.31.2 | 06 Sep 23 12:14 PDT | 06 Sep 23 12:14 PDT |
	|         | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh     | functional-779000 ssh findmnt                                                                                        | functional-779000 | jenkins | v1.31.2 | 06 Sep 23 12:14 PDT |                     |
	|         | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh     | functional-779000 ssh findmnt                                                                                        | functional-779000 | jenkins | v1.31.2 | 06 Sep 23 12:14 PDT | 06 Sep 23 12:14 PDT |
	|         | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh     | functional-779000 ssh findmnt                                                                                        | functional-779000 | jenkins | v1.31.2 | 06 Sep 23 12:14 PDT |                     |
	|         | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh     | functional-779000 ssh findmnt                                                                                        | functional-779000 | jenkins | v1.31.2 | 06 Sep 23 12:14 PDT | 06 Sep 23 12:14 PDT |
	|         | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh     | functional-779000 ssh findmnt                                                                                        | functional-779000 | jenkins | v1.31.2 | 06 Sep 23 12:14 PDT |                     |
	|         | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/06 12:13:17
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.7 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 12:13:17.903669    1773 out.go:296] Setting OutFile to fd 1 ...
	I0906 12:13:17.903788    1773 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:13:17.903790    1773 out.go:309] Setting ErrFile to fd 2...
	I0906 12:13:17.903792    1773 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:13:17.903899    1773 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17116-1006/.minikube/bin
	I0906 12:13:17.904896    1773 out.go:303] Setting JSON to false
	I0906 12:13:17.920410    1773 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":771,"bootTime":1694026826,"procs":399,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:13:17.920467    1773 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 12:13:17.924654    1773 out.go:177] * [functional-779000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 12:13:17.932541    1773 out.go:177]   - MINIKUBE_LOCATION=17116
	I0906 12:13:17.932644    1773 notify.go:220] Checking for updates...
	I0906 12:13:17.936344    1773 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	I0906 12:13:17.940440    1773 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:13:17.944521    1773 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:13:17.947443    1773 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	I0906 12:13:17.950549    1773 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:13:17.954495    1773 config.go:182] Loaded profile config "functional-779000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 12:13:17.954673    1773 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 12:13:17.957419    1773 out.go:177] * Using the qemu2 driver based on existing profile
	I0906 12:13:17.964559    1773 start.go:298] selected driver: qemu2
	I0906 12:13:17.964561    1773 start.go:902] validating driver "qemu2" against &{Name:functional-779000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:functional-779000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 12:13:17.964609    1773 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:13:17.966459    1773 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:13:17.966482    1773 cni.go:84] Creating CNI manager for ""
	I0906 12:13:17.966487    1773 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 12:13:17.966491    1773 start_flags.go:321] config:
	{Name:functional-779000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:functional-779000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 12:13:17.970119    1773 iso.go:125] acquiring lock: {Name:mk3f7665f27187d07892ba5c4ce3c49cda04e887 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:13:17.978469    1773 out.go:177] * Starting control plane node functional-779000 in cluster functional-779000
	I0906 12:13:17.982538    1773 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 12:13:17.982564    1773 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0906 12:13:17.982584    1773 cache.go:57] Caching tarball of preloaded images
	I0906 12:13:17.982643    1773 preload.go:174] Found /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 12:13:17.982646    1773 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0906 12:13:17.982696    1773 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/functional-779000/config.json ...
	I0906 12:13:17.982976    1773 start.go:365] acquiring machines lock for functional-779000: {Name:mkcd55fe6404821d1fd04819f324aec293fbf60b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:13:17.983001    1773 start.go:369] acquired machines lock for "functional-779000" in 20.875µs
	I0906 12:13:17.983008    1773 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:13:17.983011    1773 fix.go:54] fixHost starting: 
	I0906 12:13:17.983564    1773 fix.go:102] recreateIfNeeded on functional-779000: state=Running err=<nil>
	W0906 12:13:17.983571    1773 fix.go:128] unexpected machine state, will restart: <nil>
	I0906 12:13:17.987464    1773 out.go:177] * Updating the running qemu2 "functional-779000" VM ...
	I0906 12:13:17.995475    1773 machine.go:88] provisioning docker machine ...
	I0906 12:13:17.995484    1773 buildroot.go:166] provisioning hostname "functional-779000"
	I0906 12:13:17.995519    1773 main.go:141] libmachine: Using SSH client type: native
	I0906 12:13:17.995765    1773 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1005ee3b0] 0x1005f0e10 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0906 12:13:17.995770    1773 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-779000 && echo "functional-779000" | sudo tee /etc/hostname
	I0906 12:13:18.061731    1773 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-779000
	
	I0906 12:13:18.061772    1773 main.go:141] libmachine: Using SSH client type: native
	I0906 12:13:18.062003    1773 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1005ee3b0] 0x1005f0e10 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0906 12:13:18.062009    1773 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-779000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-779000/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-779000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 12:13:18.123368    1773 main.go:141] libmachine: SSH cmd err, output: <nil>: 
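	(The /etc/hosts rewrite that minikube ran over SSH above can be sketched locally against a throwaway copy; the temp path and the seeded file contents below are illustrative, not from this log.)

	```shell
	# Same branch logic as the logged SSH command, run on a temp copy of /etc/hosts.
	HOSTS=$(mktemp)
	NAME=functional-779000
	printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"
	if ! grep -q "[[:space:]]$NAME\$" "$HOSTS"; then
	  if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
	    # an existing 127.0.1.1 entry is rewritten in place...
	    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
	  else
	    # ...otherwise a fresh entry is appended
	    echo "127.0.1.1 $NAME" >> "$HOSTS"
	  fi
	fi
	cat "$HOSTS"
	```

	Either branch leaves exactly one 127.0.1.1 line carrying the machine's hostname, which is why the real command is safe to re-run on an already-provisioned VM.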
	I0906 12:13:18.123381    1773 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17116-1006/.minikube CaCertPath:/Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17116-1006/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17116-1006/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17116-1006/.minikube}
	I0906 12:13:18.123391    1773 buildroot.go:174] setting up certificates
	I0906 12:13:18.123397    1773 provision.go:83] configureAuth start
	I0906 12:13:18.123399    1773 provision.go:138] copyHostCerts
	I0906 12:13:18.123480    1773 exec_runner.go:144] found /Users/jenkins/minikube-integration/17116-1006/.minikube/ca.pem, removing ...
	I0906 12:13:18.123483    1773 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17116-1006/.minikube/ca.pem
	I0906 12:13:18.123582    1773 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17116-1006/.minikube/ca.pem (1078 bytes)
	I0906 12:13:18.123738    1773 exec_runner.go:144] found /Users/jenkins/minikube-integration/17116-1006/.minikube/cert.pem, removing ...
	I0906 12:13:18.123739    1773 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17116-1006/.minikube/cert.pem
	I0906 12:13:18.123787    1773 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17116-1006/.minikube/cert.pem (1123 bytes)
	I0906 12:13:18.123882    1773 exec_runner.go:144] found /Users/jenkins/minikube-integration/17116-1006/.minikube/key.pem, removing ...
	I0906 12:13:18.123883    1773 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17116-1006/.minikube/key.pem
	I0906 12:13:18.124003    1773 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17116-1006/.minikube/key.pem (1679 bytes)
	I0906 12:13:18.124102    1773 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca-key.pem org=jenkins.functional-779000 san=[192.168.105.4 192.168.105.4 localhost 127.0.0.1 minikube functional-779000]
	I0906 12:13:18.232736    1773 provision.go:172] copyRemoteCerts
	I0906 12:13:18.232799    1773 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 12:13:18.232806    1773 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/functional-779000/id_rsa Username:docker}
	I0906 12:13:18.267657    1773 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 12:13:18.274853    1773 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0906 12:13:18.281454    1773 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 12:13:18.287920    1773 provision.go:86] duration metric: configureAuth took 164.519875ms
	I0906 12:13:18.287925    1773 buildroot.go:189] setting minikube options for container-runtime
	I0906 12:13:18.288037    1773 config.go:182] Loaded profile config "functional-779000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 12:13:18.288081    1773 main.go:141] libmachine: Using SSH client type: native
	I0906 12:13:18.288292    1773 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1005ee3b0] 0x1005f0e10 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0906 12:13:18.288295    1773 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 12:13:18.350413    1773 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0906 12:13:18.350417    1773 buildroot.go:70] root file system type: tmpfs
	I0906 12:13:18.350472    1773 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 12:13:18.350538    1773 main.go:141] libmachine: Using SSH client type: native
	I0906 12:13:18.350774    1773 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1005ee3b0] 0x1005f0e10 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0906 12:13:18.350807    1773 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 12:13:18.417091    1773 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 12:13:18.417147    1773 main.go:141] libmachine: Using SSH client type: native
	I0906 12:13:18.417399    1773 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1005ee3b0] 0x1005f0e10 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0906 12:13:18.417406    1773 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 12:13:18.480405    1773 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 12:13:18.480411    1773 machine.go:91] provisioned docker machine in 484.936333ms
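	(The `diff -u old new || { mv ...; systemctl ... }` one-liner above is an idempotence guard: docker.service is replaced and restarted only when the freshly rendered unit actually differs. A minimal sketch with temp files, where an echo stands in for the real systemctl calls:)

	```shell
	OLD=$(mktemp); NEW=$(mktemp)
	echo 'ExecStart=/usr/bin/dockerd --label provider=old' > "$OLD"
	echo 'ExecStart=/usr/bin/dockerd --label provider=qemu2' > "$NEW"
	# diff exits non-zero when the files differ, so the { ... } block after ||
	# runs only when the unit changed; an identical unit skips the restart.
	diff -u "$OLD" "$NEW" > /dev/null || {
	  mv "$NEW" "$OLD"
	  echo 'would run: systemctl daemon-reload && systemctl restart docker'
	}
	grep provider "$OLD"
	```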
	I0906 12:13:18.480415    1773 start.go:300] post-start starting for "functional-779000" (driver="qemu2")
	I0906 12:13:18.480419    1773 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 12:13:18.480461    1773 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 12:13:18.480467    1773 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/functional-779000/id_rsa Username:docker}
	I0906 12:13:18.513589    1773 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 12:13:18.515034    1773 info.go:137] Remote host: Buildroot 2021.02.12
	I0906 12:13:18.515042    1773 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17116-1006/.minikube/addons for local assets ...
	I0906 12:13:18.515116    1773 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17116-1006/.minikube/files for local assets ...
	I0906 12:13:18.515215    1773 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17116-1006/.minikube/files/etc/ssl/certs/14212.pem -> 14212.pem in /etc/ssl/certs
	I0906 12:13:18.515311    1773 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17116-1006/.minikube/files/etc/test/nested/copy/1421/hosts -> hosts in /etc/test/nested/copy/1421
	I0906 12:13:18.515337    1773 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1421
	I0906 12:13:18.518109    1773 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17116-1006/.minikube/files/etc/ssl/certs/14212.pem --> /etc/ssl/certs/14212.pem (1708 bytes)
	I0906 12:13:18.525059    1773 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17116-1006/.minikube/files/etc/test/nested/copy/1421/hosts --> /etc/test/nested/copy/1421/hosts (40 bytes)
	I0906 12:13:18.531654    1773 start.go:303] post-start completed in 51.234958ms
	I0906 12:13:18.531658    1773 fix.go:56] fixHost completed within 548.652958ms
	I0906 12:13:18.531704    1773 main.go:141] libmachine: Using SSH client type: native
	I0906 12:13:18.531943    1773 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1005ee3b0] 0x1005f0e10 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0906 12:13:18.531946    1773 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0906 12:13:18.592182    1773 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694027598.580955805
	
	I0906 12:13:18.592185    1773 fix.go:206] guest clock: 1694027598.580955805
	I0906 12:13:18.592188    1773 fix.go:219] Guest: 2023-09-06 12:13:18.580955805 -0700 PDT Remote: 2023-09-06 12:13:18.531659 -0700 PDT m=+0.647381334 (delta=49.296805ms)
	I0906 12:13:18.592198    1773 fix.go:190] guest clock delta is within tolerance: 49.296805ms
	I0906 12:13:18.592199    1773 start.go:83] releasing machines lock for "functional-779000", held for 609.200875ms
	I0906 12:13:18.592474    1773 ssh_runner.go:195] Run: cat /version.json
	I0906 12:13:18.592480    1773 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/functional-779000/id_rsa Username:docker}
	I0906 12:13:18.592488    1773 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 12:13:18.592504    1773 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/functional-779000/id_rsa Username:docker}
	I0906 12:13:18.666545    1773 ssh_runner.go:195] Run: systemctl --version
	I0906 12:13:18.668461    1773 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 12:13:18.670186    1773 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 12:13:18.670211    1773 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 12:13:18.673159    1773 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0906 12:13:18.673163    1773 start.go:466] detecting cgroup driver to use...
	I0906 12:13:18.673215    1773 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:13:18.678421    1773 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0906 12:13:18.681345    1773 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0906 12:13:18.684716    1773 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0906 12:13:18.684743    1773 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0906 12:13:18.688381    1773 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:13:18.691796    1773 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0906 12:13:18.694982    1773 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:13:18.697988    1773 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 12:13:18.700867    1773 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
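The run of `sed` edits above rewrites `/etc/containerd/config.toml` in place: pause image, OOM-score restriction, cgroup driver, and runtime type. The same expressions (minus `sudo`) applied to a trimmed sample config; the sample content is ours, not from the log:

```shell
# Apply the log's containerd config.toml edits to a local sample file.
cfg="$(mktemp)"
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
    sandbox_image = "registry.k8s.io/pause:3.8"
    restrict_oom_score_adj = true
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
runtime_type = "io.containerd.runtime.v1.linux"
EOF
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$cfg"
sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' "$cfg"
```

The `-r` flag plus the `\1` backreference preserves whatever indentation each key already had.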
	I0906 12:13:18.704193    1773 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 12:13:18.706638    1773 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 12:13:18.709222    1773 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:13:18.813597    1773 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0906 12:13:18.819594    1773 start.go:466] detecting cgroup driver to use...
	I0906 12:13:18.819628    1773 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 12:13:18.826844    1773 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:13:18.832671    1773 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 12:13:18.838950    1773 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:13:18.843852    1773 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:13:18.848560    1773 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:13:18.853782    1773 ssh_runner.go:195] Run: which cri-dockerd
	I0906 12:13:18.855096    1773 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0906 12:13:18.857704    1773 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0906 12:13:18.862234    1773 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 12:13:18.960884    1773 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 12:13:19.060808    1773 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0906 12:13:19.060817    1773 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0906 12:13:19.065803    1773 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:13:19.165542    1773 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 12:13:30.469370    1773 ssh_runner.go:235] Completed: sudo systemctl restart docker: (11.3038985s)
	I0906 12:13:30.469435    1773 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0906 12:13:30.553969    1773 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0906 12:13:30.634696    1773 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0906 12:13:30.724474    1773 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:13:30.808253    1773 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0906 12:13:30.816385    1773 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:13:30.913500    1773 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0906 12:13:30.939459    1773 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0906 12:13:30.939526    1773 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0906 12:13:30.941537    1773 start.go:534] Will wait 60s for crictl version
	I0906 12:13:30.941593    1773 ssh_runner.go:195] Run: which crictl
	I0906 12:13:30.943026    1773 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 12:13:30.954464    1773 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.5
	RuntimeApiVersion:  v1alpha2
	I0906 12:13:30.954535    1773 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 12:13:30.962192    1773 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 12:13:30.972664    1773 out.go:204] * Preparing Kubernetes v1.28.1 on Docker 24.0.5 ...
	I0906 12:13:30.972751    1773 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0906 12:13:30.978758    1773 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0906 12:13:30.981751    1773 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 12:13:30.981810    1773 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 12:13:30.987761    1773 docker.go:636] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-779000
	registry.k8s.io/kube-apiserver:v1.28.1
	registry.k8s.io/kube-proxy:v1.28.1
	registry.k8s.io/kube-scheduler:v1.28.1
	registry.k8s.io/kube-controller-manager:v1.28.1
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0906 12:13:30.987769    1773 docker.go:566] Images already preloaded, skipping extraction
	I0906 12:13:30.987819    1773 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 12:13:30.997383    1773 docker.go:636] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-779000
	registry.k8s.io/kube-apiserver:v1.28.1
	registry.k8s.io/kube-controller-manager:v1.28.1
	registry.k8s.io/kube-scheduler:v1.28.1
	registry.k8s.io/kube-proxy:v1.28.1
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0906 12:13:30.997388    1773 cache_images.go:84] Images are preloaded, skipping loading
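The "Images are preloaded" decision boils down to checking that every image the release needs appears in the `docker images --format '{{.Repository}}:{{.Tag}}'` output. A rough sketch of that membership check, using lists pasted from the log instead of a live docker daemon:

```shell
# Check a required-image list against a captured `docker images` listing.
# Both lists below are excerpts from the log; a live check would shell out to docker.
got='registry.k8s.io/kube-apiserver:v1.28.1
registry.k8s.io/kube-proxy:v1.28.1
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/pause:3.9'
missing=0
for img in registry.k8s.io/kube-apiserver:v1.28.1 registry.k8s.io/etcd:3.5.9-0; do
  # -x matches the whole line, -F disables regex so tags with dots compare literally
  printf '%s\n' "$got" | grep -qxF "$img" || missing=$((missing+1))
done
```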
	I0906 12:13:30.997436    1773 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0906 12:13:31.004980    1773 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0906 12:13:31.005001    1773 cni.go:84] Creating CNI manager for ""
	I0906 12:13:31.005005    1773 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 12:13:31.005009    1773 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0906 12:13:31.005017    1773 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.4 APIServerPort:8441 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-779000 NodeName:functional-779000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 12:13:31.005082    1773 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.4
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-779000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
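The generated kubeadm config above is plain YAML, so individual fields can be sanity-checked with text tools before it is shipped to the node. A sketch pulling two values back out of a trimmed copy (the sample lines are from the log; the `awk` extraction is ours):

```shell
# Extract podSubnet and cgroupDriver from a trimmed kubeadm config sample.
yaml="$(mktemp)"
cat > "$yaml" <<'EOF'
kubernetesVersion: v1.28.1
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
cgroupDriver: cgroupfs
EOF
pod_subnet=$(awk -F'"' '/podSubnet:/ {print $2}' "$yaml")     # value sits between the quotes
cgroup_driver=$(awk '/^cgroupDriver:/ {print $2}' "$yaml")    # unquoted scalar: second field
```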
	
	I0906 12:13:31.005111    1773 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=functional-779000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:functional-779000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:}
	I0906 12:13:31.005177    1773 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0906 12:13:31.008384    1773 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 12:13:31.008414    1773 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 12:13:31.011509    1773 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0906 12:13:31.016683    1773 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 12:13:31.021576    1773 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1953 bytes)
	I0906 12:13:31.026414    1773 ssh_runner.go:195] Run: grep 192.168.105.4	control-plane.minikube.internal$ /etc/hosts
	I0906 12:13:31.027808    1773 certs.go:56] Setting up /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/functional-779000 for IP: 192.168.105.4
	I0906 12:13:31.027814    1773 certs.go:190] acquiring lock for shared ca certs: {Name:mk2fda2e4681223badcda373e6897c8a04d70962 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:13:31.027937    1773 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17116-1006/.minikube/ca.key
	I0906 12:13:31.027979    1773 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17116-1006/.minikube/proxy-client-ca.key
	I0906 12:13:31.028029    1773 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/functional-779000/client.key
	I0906 12:13:31.028073    1773 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/functional-779000/apiserver.key.942c473b
	I0906 12:13:31.028110    1773 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/functional-779000/proxy-client.key
	I0906 12:13:31.028248    1773 certs.go:437] found cert: /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/Users/jenkins/minikube-integration/17116-1006/.minikube/certs/1421.pem (1338 bytes)
	W0906 12:13:31.028270    1773 certs.go:433] ignoring /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/Users/jenkins/minikube-integration/17116-1006/.minikube/certs/1421_empty.pem, impossibly tiny 0 bytes
	I0906 12:13:31.028276    1773 certs.go:437] found cert: /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 12:13:31.028295    1773 certs.go:437] found cert: /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem (1078 bytes)
	I0906 12:13:31.028314    1773 certs.go:437] found cert: /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/Users/jenkins/minikube-integration/17116-1006/.minikube/certs/cert.pem (1123 bytes)
	I0906 12:13:31.028330    1773 certs.go:437] found cert: /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/Users/jenkins/minikube-integration/17116-1006/.minikube/certs/key.pem (1679 bytes)
	I0906 12:13:31.028372    1773 certs.go:437] found cert: /Users/jenkins/minikube-integration/17116-1006/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17116-1006/.minikube/files/etc/ssl/certs/14212.pem (1708 bytes)
	I0906 12:13:31.028736    1773 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/functional-779000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0906 12:13:31.036339    1773 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/functional-779000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0906 12:13:31.043661    1773 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/functional-779000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 12:13:31.051204    1773 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/functional-779000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0906 12:13:31.058076    1773 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17116-1006/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 12:13:31.064852    1773 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17116-1006/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0906 12:13:31.072236    1773 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17116-1006/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 12:13:31.079548    1773 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17116-1006/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0906 12:13:31.086708    1773 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17116-1006/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 12:13:31.093593    1773 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/1421.pem --> /usr/share/ca-certificates/1421.pem (1338 bytes)
	I0906 12:13:31.100283    1773 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17116-1006/.minikube/files/etc/ssl/certs/14212.pem --> /usr/share/ca-certificates/14212.pem (1708 bytes)
	I0906 12:13:31.108407    1773 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 12:13:31.113628    1773 ssh_runner.go:195] Run: openssl version
	I0906 12:13:31.115433    1773 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 12:13:31.118443    1773 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:13:31.119856    1773 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep  6 19:10 /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:13:31.119875    1773 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:13:31.121810    1773 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 12:13:31.124560    1773 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1421.pem && ln -fs /usr/share/ca-certificates/1421.pem /etc/ssl/certs/1421.pem"
	I0906 12:13:31.127954    1773 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1421.pem
	I0906 12:13:31.129600    1773 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep  6 19:11 /usr/share/ca-certificates/1421.pem
	I0906 12:13:31.129622    1773 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1421.pem
	I0906 12:13:31.131340    1773 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1421.pem /etc/ssl/certs/51391683.0"
	I0906 12:13:31.134033    1773 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14212.pem && ln -fs /usr/share/ca-certificates/14212.pem /etc/ssl/certs/14212.pem"
	I0906 12:13:31.136942    1773 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14212.pem
	I0906 12:13:31.138403    1773 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep  6 19:11 /usr/share/ca-certificates/14212.pem
	I0906 12:13:31.138422    1773 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14212.pem
	I0906 12:13:31.140180    1773 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14212.pem /etc/ssl/certs/3ec20f2e.0"
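Each `test -L … || ln -fs …` pair above makes the `/etc/ssl/certs` hash-link creation idempotent: a re-run neither fails nor stacks duplicate links. Sketch of the pattern with dummy files in a temp dir; the `b5213941.0`-style name would normally come from `openssl x509 -hash` on the PEM:

```shell
# Idempotent hash-symlink pattern from the log, using a dummy PEM and a fixed
# hash name (the real hash is computed by openssl; fixing it here is our shortcut).
d="$(mktemp -d)"
echo 'dummy pem' > "$d/minikubeCA.pem"
test -L "$d/b5213941.0" || ln -fs "$d/minikubeCA.pem" "$d/b5213941.0"
test -L "$d/b5213941.0" || ln -fs "$d/minikubeCA.pem" "$d/b5213941.0"  # second run is a no-op
```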
	I0906 12:13:31.143409    1773 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0906 12:13:31.144926    1773 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0906 12:13:31.146905    1773 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0906 12:13:31.148625    1773 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0906 12:13:31.150594    1773 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0906 12:13:31.152325    1773 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0906 12:13:31.154264    1773 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
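The `-checkend 86400` runs above ask openssl whether each cert will still be valid 24 hours from now (exit 0 means yes). A self-contained demo against a throwaway self-signed cert, assuming an `openssl` binary is on PATH:

```shell
# Generate a 2-day self-signed cert, then confirm it survives the 24h check.
d="$(mktemp -d)"
openssl req -x509 -newkey rsa:2048 -nodes -days 2 -subj '/CN=demo' \
  -keyout "$d/key.pem" -out "$d/cert.pem" 2>/dev/null
openssl x509 -noout -in "$d/cert.pem" -checkend 86400
still_valid=$?   # 0 = valid for at least another 86400 seconds
```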
	I0906 12:13:31.155989    1773 kubeadm.go:404] StartCluster: {Name:functional-779000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:functional-779000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 12:13:31.156064    1773 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 12:13:31.161714    1773 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 12:13:31.164732    1773 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0906 12:13:31.164741    1773 kubeadm.go:636] restartCluster start
	I0906 12:13:31.164764    1773 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 12:13:31.167955    1773 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 12:13:31.168246    1773 kubeconfig.go:92] found "functional-779000" server: "https://192.168.105.4:8441"
	I0906 12:13:31.168961    1773 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 12:13:31.172113    1773 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
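The "needs reconfigure" decision above is driven entirely by `diff -u` on the old and new kubeadm.yaml: exit status 0 means identical, 1 means they differ. A minimal reproduction of the admission-plugins difference from the log:

```shell
# Reproduce the reconfigure check: diff the old and new config fragments.
old="$(mktemp)"; new="$(mktemp)"
printf 'enable-admission-plugins: "NamespaceLifecycle,ResourceQuota"\n' > "$old"
printf 'enable-admission-plugins: "NamespaceAutoProvision"\n' > "$new"
if diff -u "$old" "$new" > /dev/null; then
  needs_reconfigure=no    # exit 0: files identical, restart can reuse configs
else
  needs_reconfigure=yes   # exit 1: configs differ, cluster must be reconfigured
fi
```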
	I0906 12:13:31.172116    1773 kubeadm.go:1128] stopping kube-system containers ...
	I0906 12:13:31.172154    1773 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 12:13:31.179261    1773 docker.go:462] Stopping containers: [ea59574d201a 5f7c382d194a 3972f5ac3c36 6f287df4dec1 5a100a49eedb fa1f611fa0a2 d7fd69da5c47 c621621619ad ac910b99d3a3 cc62d84f5d5a d63f4ad1bf9c 8f227b9527d9 68dfe693eaa1 c61e51492422 524d9b48f8b9 42cfb3ffea86 8794fc3b2b77 f20d2357bcc1 4739bae89d47 d4521fa5076e f81a2d91c6f3 9ecc956e7563 4e6d5e8cbade 425f56947c18 7642e19f9d90 09af7836796e c41080fdbf04 2eb0429d1593 1d3e6d89654e]
	I0906 12:13:31.179313    1773 ssh_runner.go:195] Run: docker stop ea59574d201a 5f7c382d194a 3972f5ac3c36 6f287df4dec1 5a100a49eedb fa1f611fa0a2 d7fd69da5c47 c621621619ad ac910b99d3a3 cc62d84f5d5a d63f4ad1bf9c 8f227b9527d9 68dfe693eaa1 c61e51492422 524d9b48f8b9 42cfb3ffea86 8794fc3b2b77 f20d2357bcc1 4739bae89d47 d4521fa5076e f81a2d91c6f3 9ecc956e7563 4e6d5e8cbade 425f56947c18 7642e19f9d90 09af7836796e c41080fdbf04 2eb0429d1593 1d3e6d89654e
	I0906 12:13:31.185495    1773 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0906 12:13:31.282569    1773 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 12:13:31.287307    1773 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Sep  6 19:11 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5657 Sep  6 19:11 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Sep  6 19:11 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Sep  6 19:11 /etc/kubernetes/scheduler.conf
	
	I0906 12:13:31.287348    1773 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0906 12:13:31.291637    1773 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0906 12:13:31.295654    1773 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0906 12:13:31.299406    1773 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0906 12:13:31.299431    1773 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 12:13:31.302937    1773 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0906 12:13:31.305821    1773 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0906 12:13:31.305842    1773 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 12:13:31.308767    1773 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 12:13:31.311796    1773 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0906 12:13:31.311799    1773 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 12:13:31.333215    1773 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 12:13:32.259028    1773 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0906 12:13:32.378313    1773 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 12:13:32.404930    1773 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0906 12:13:32.432467    1773 api_server.go:52] waiting for apiserver process to appear ...
	I0906 12:13:32.432533    1773 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 12:13:32.437475    1773 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 12:13:32.947677    1773 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 12:13:33.447651    1773 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 12:13:33.451911    1773 api_server.go:72] duration metric: took 1.019452959s to wait for apiserver process to appear ...
	I0906 12:13:33.451916    1773 api_server.go:88] waiting for apiserver healthz status ...
	I0906 12:13:33.451923    1773 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0906 12:13:35.000950    1773 api_server.go:279] https://192.168.105.4:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0906 12:13:35.000959    1773 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0906 12:13:35.000964    1773 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0906 12:13:35.008383    1773 api_server.go:279] https://192.168.105.4:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0906 12:13:35.008389    1773 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0906 12:13:35.510420    1773 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0906 12:13:35.513636    1773 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0906 12:13:35.513642    1773 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0906 12:13:36.010433    1773 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0906 12:13:36.013611    1773 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0906 12:13:36.013616    1773 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0906 12:13:36.510418    1773 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0906 12:13:36.513948    1773 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I0906 12:13:36.519460    1773 api_server.go:141] control plane version: v1.28.1
	I0906 12:13:36.519465    1773 api_server.go:131] duration metric: took 3.067569291s to wait for apiserver health ...
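The 500 responses above come from the apiserver's verbose `/healthz` output, where each check is prefixed with `[+]` (passing) or `[-]` (failing). As a small illustrative sketch (not minikube's actual code), the failing checks can be pulled out of such a body like this:

```python
def failing_checks(healthz_body: str) -> list[str]:
    """Return the names of checks marked '[-]' (failed) in a
    verbose kube-apiserver /healthz response body."""
    failed = []
    for line in healthz_body.splitlines():
        line = line.strip()
        if line.startswith("[-]"):
            # e.g. "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld"
            failed.append(line[3:].split(" failed", 1)[0])
    return failed

body = """[+]ping ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]etcd ok"""
print(failing_checks(body))  # ['poststarthook/rbac/bootstrap-roles']
```

Against the 500 bodies logged above, this would report `rbac/bootstrap-roles` (and initially `scheduling/bootstrap-system-priority-classes`) as the checks holding up readiness.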
	I0906 12:13:36.519468    1773 cni.go:84] Creating CNI manager for ""
	I0906 12:13:36.519478    1773 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 12:13:36.523650    1773 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0906 12:13:36.526718    1773 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0906 12:13:36.529808    1773 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0906 12:13:36.534671    1773 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 12:13:36.539187    1773 system_pods.go:59] 7 kube-system pods found
	I0906 12:13:36.539195    1773 system_pods.go:61] "coredns-5dd5756b68-7mpt6" [99ffb644-5c31-4397-a41d-123146bc7822] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 12:13:36.539199    1773 system_pods.go:61] "etcd-functional-779000" [1bbd8174-7114-46f4-840c-cadd830fb7bd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0906 12:13:36.539201    1773 system_pods.go:61] "kube-apiserver-functional-779000" [e6d62d2a-145d-4fc5-b2c9-0acdc449ffc5] Pending
	I0906 12:13:36.539205    1773 system_pods.go:61] "kube-controller-manager-functional-779000" [2354cb60-5f4a-413f-87bc-42a40e4e59a9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0906 12:13:36.539207    1773 system_pods.go:61] "kube-proxy-9l64f" [17d9dd7b-62cc-45d0-807c-bcf97e1f17b5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0906 12:13:36.539210    1773 system_pods.go:61] "kube-scheduler-functional-779000" [b4229f91-4d9b-4cfa-b561-df5186739939] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0906 12:13:36.539212    1773 system_pods.go:61] "storage-provisioner" [8ba268d9-d706-43b1-b613-105f8077cb20] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 12:13:36.539214    1773 system_pods.go:74] duration metric: took 4.540292ms to wait for pod list to return data ...
	I0906 12:13:36.539216    1773 node_conditions.go:102] verifying NodePressure condition ...
	I0906 12:13:36.540710    1773 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0906 12:13:36.540716    1773 node_conditions.go:123] node cpu capacity is 2
	I0906 12:13:36.540721    1773 node_conditions.go:105] duration metric: took 1.503333ms to run NodePressure ...
	I0906 12:13:36.540728    1773 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 12:13:36.622540    1773 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0906 12:13:36.624859    1773 kubeadm.go:787] kubelet initialised
	I0906 12:13:36.624863    1773 kubeadm.go:788] duration metric: took 2.316625ms waiting for restarted kubelet to initialise ...
	I0906 12:13:36.624865    1773 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 12:13:36.627480    1773 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-7mpt6" in "kube-system" namespace to be "Ready" ...
	I0906 12:13:36.629719    1773 pod_ready.go:92] pod "coredns-5dd5756b68-7mpt6" in "kube-system" namespace has status "Ready":"True"
	I0906 12:13:36.629722    1773 pod_ready.go:81] duration metric: took 2.238709ms waiting for pod "coredns-5dd5756b68-7mpt6" in "kube-system" namespace to be "Ready" ...
	I0906 12:13:36.629725    1773 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-779000" in "kube-system" namespace to be "Ready" ...
	I0906 12:13:38.639218    1773 pod_ready.go:102] pod "etcd-functional-779000" in "kube-system" namespace has status "Ready":"False"
	I0906 12:13:40.639349    1773 pod_ready.go:102] pod "etcd-functional-779000" in "kube-system" namespace has status "Ready":"False"
	I0906 12:13:43.139004    1773 pod_ready.go:102] pod "etcd-functional-779000" in "kube-system" namespace has status "Ready":"False"
	I0906 12:13:45.639118    1773 pod_ready.go:102] pod "etcd-functional-779000" in "kube-system" namespace has status "Ready":"False"
	I0906 12:13:48.138383    1773 pod_ready.go:102] pod "etcd-functional-779000" in "kube-system" namespace has status "Ready":"False"
	I0906 12:13:49.638748    1773 pod_ready.go:92] pod "etcd-functional-779000" in "kube-system" namespace has status "Ready":"True"
	I0906 12:13:49.638754    1773 pod_ready.go:81] duration metric: took 13.009121375s waiting for pod "etcd-functional-779000" in "kube-system" namespace to be "Ready" ...
	I0906 12:13:49.638757    1773 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-779000" in "kube-system" namespace to be "Ready" ...
	I0906 12:13:49.641014    1773 pod_ready.go:92] pod "kube-apiserver-functional-779000" in "kube-system" namespace has status "Ready":"True"
	I0906 12:13:49.641016    1773 pod_ready.go:81] duration metric: took 2.256708ms waiting for pod "kube-apiserver-functional-779000" in "kube-system" namespace to be "Ready" ...
	I0906 12:13:49.641019    1773 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-779000" in "kube-system" namespace to be "Ready" ...
	I0906 12:13:49.643115    1773 pod_ready.go:92] pod "kube-controller-manager-functional-779000" in "kube-system" namespace has status "Ready":"True"
	I0906 12:13:49.643118    1773 pod_ready.go:81] duration metric: took 2.09675ms waiting for pod "kube-controller-manager-functional-779000" in "kube-system" namespace to be "Ready" ...
	I0906 12:13:49.643121    1773 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9l64f" in "kube-system" namespace to be "Ready" ...
	I0906 12:13:49.645418    1773 pod_ready.go:92] pod "kube-proxy-9l64f" in "kube-system" namespace has status "Ready":"True"
	I0906 12:13:49.645420    1773 pod_ready.go:81] duration metric: took 2.29725ms waiting for pod "kube-proxy-9l64f" in "kube-system" namespace to be "Ready" ...
	I0906 12:13:49.645423    1773 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-779000" in "kube-system" namespace to be "Ready" ...
	I0906 12:13:49.647799    1773 pod_ready.go:92] pod "kube-scheduler-functional-779000" in "kube-system" namespace has status "Ready":"True"
	I0906 12:13:49.647802    1773 pod_ready.go:81] duration metric: took 2.377209ms waiting for pod "kube-scheduler-functional-779000" in "kube-system" namespace to be "Ready" ...
	I0906 12:13:49.647807    1773 pod_ready.go:38] duration metric: took 13.023031s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
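The `pod_ready.go` wait loop above polls each pod until its status reports `"Ready":"True"`. That test reduces to inspecting the `Ready` entry in the pod's status conditions; a minimal sketch against the standard Pod status schema (this is an illustration, not minikube's implementation) looks like:

```python
def pod_is_ready(pod: dict) -> bool:
    """True iff the pod has a status condition of type 'Ready'
    whose status is the string 'True' (the standard Pod schema
    uses strings, not booleans, for condition statuses)."""
    for cond in pod.get("status", {}).get("conditions", []):
        if cond.get("type") == "Ready":
            return cond.get("status") == "True"
    return False

# A pod mid-restart, like etcd-functional-779000 at 12:13:38 above:
etcd_pod = {"status": {"conditions": [{"type": "Ready", "status": "False"}]}}
print(pod_is_ready(etcd_pod))  # False
```

A pod with no `Ready` condition at all (e.g. still `Pending`, like `kube-apiserver-functional-779000` in the earlier pod list) is treated as not ready.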
	I0906 12:13:49.647814    1773 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 12:13:49.651286    1773 ops.go:34] apiserver oom_adj: -16
	I0906 12:13:49.651289    1773 kubeadm.go:640] restartCluster took 18.486680791s
	I0906 12:13:49.651291    1773 kubeadm.go:406] StartCluster complete in 18.495439583s
	I0906 12:13:49.651298    1773 settings.go:142] acquiring lock: {Name:mkdab5683cd98d968361f82dee37aa31492af7cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:13:49.651374    1773 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17116-1006/kubeconfig
	I0906 12:13:49.651683    1773 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17116-1006/kubeconfig: {Name:mk69a76938a18011410dd32eccb7fee080824c64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:13:49.651885    1773 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0906 12:13:49.651916    1773 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0906 12:13:49.651947    1773 addons.go:69] Setting storage-provisioner=true in profile "functional-779000"
	I0906 12:13:49.651953    1773 addons.go:231] Setting addon storage-provisioner=true in "functional-779000"
	I0906 12:13:49.651953    1773 addons.go:69] Setting default-storageclass=true in profile "functional-779000"
	W0906 12:13:49.651955    1773 addons.go:240] addon storage-provisioner should already be in state true
	I0906 12:13:49.651959    1773 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-779000"
	I0906 12:13:49.651978    1773 host.go:66] Checking if "functional-779000" exists ...
	I0906 12:13:49.651994    1773 config.go:182] Loaded profile config "functional-779000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 12:13:49.656907    1773 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 12:13:49.660990    1773 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 12:13:49.660994    1773 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0906 12:13:49.661002    1773 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/functional-779000/id_rsa Username:docker}
	I0906 12:13:49.661493    1773 kapi.go:248] "coredns" deployment in "kube-system" namespace and "functional-779000" context rescaled to 1 replicas
	I0906 12:13:49.661505    1773 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:13:49.664922    1773 out.go:177] * Verifying Kubernetes components...
	I0906 12:13:49.663474    1773 addons.go:231] Setting addon default-storageclass=true in "functional-779000"
	W0906 12:13:49.672962    1773 addons.go:240] addon default-storageclass should already be in state true
	I0906 12:13:49.672977    1773 host.go:66] Checking if "functional-779000" exists ...
	I0906 12:13:49.673007    1773 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 12:13:49.673689    1773 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 12:13:49.673692    1773 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 12:13:49.673697    1773 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/functional-779000/id_rsa Username:docker}
	I0906 12:13:49.696320    1773 start.go:880] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0906 12:13:49.696355    1773 node_ready.go:35] waiting up to 6m0s for node "functional-779000" to be "Ready" ...
	I0906 12:13:49.709844    1773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 12:13:49.717860    1773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 12:13:49.839578    1773 node_ready.go:49] node "functional-779000" has status "Ready":"True"
	I0906 12:13:49.839588    1773 node_ready.go:38] duration metric: took 143.223458ms waiting for node "functional-779000" to be "Ready" ...
	I0906 12:13:49.839592    1773 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 12:13:50.041084    1773 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-7mpt6" in "kube-system" namespace to be "Ready" ...
	I0906 12:13:50.080083    1773 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0906 12:13:50.084007    1773 addons.go:502] enable addons completed in 432.094166ms: enabled=[default-storageclass storage-provisioner]
	I0906 12:13:50.439384    1773 pod_ready.go:92] pod "coredns-5dd5756b68-7mpt6" in "kube-system" namespace has status "Ready":"True"
	I0906 12:13:50.439390    1773 pod_ready.go:81] duration metric: took 398.301792ms waiting for pod "coredns-5dd5756b68-7mpt6" in "kube-system" namespace to be "Ready" ...
	I0906 12:13:50.439394    1773 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-779000" in "kube-system" namespace to be "Ready" ...
	I0906 12:13:50.839181    1773 pod_ready.go:92] pod "etcd-functional-779000" in "kube-system" namespace has status "Ready":"True"
	I0906 12:13:50.839187    1773 pod_ready.go:81] duration metric: took 399.792708ms waiting for pod "etcd-functional-779000" in "kube-system" namespace to be "Ready" ...
	I0906 12:13:50.839192    1773 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-779000" in "kube-system" namespace to be "Ready" ...
	I0906 12:13:51.239311    1773 pod_ready.go:92] pod "kube-apiserver-functional-779000" in "kube-system" namespace has status "Ready":"True"
	I0906 12:13:51.239318    1773 pod_ready.go:81] duration metric: took 400.126208ms waiting for pod "kube-apiserver-functional-779000" in "kube-system" namespace to be "Ready" ...
	I0906 12:13:51.239323    1773 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-779000" in "kube-system" namespace to be "Ready" ...
	I0906 12:13:51.639435    1773 pod_ready.go:92] pod "kube-controller-manager-functional-779000" in "kube-system" namespace has status "Ready":"True"
	I0906 12:13:51.639440    1773 pod_ready.go:81] duration metric: took 400.117333ms waiting for pod "kube-controller-manager-functional-779000" in "kube-system" namespace to be "Ready" ...
	I0906 12:13:51.639444    1773 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9l64f" in "kube-system" namespace to be "Ready" ...
	I0906 12:13:52.039189    1773 pod_ready.go:92] pod "kube-proxy-9l64f" in "kube-system" namespace has status "Ready":"True"
	I0906 12:13:52.039194    1773 pod_ready.go:81] duration metric: took 399.750458ms waiting for pod "kube-proxy-9l64f" in "kube-system" namespace to be "Ready" ...
	I0906 12:13:52.039198    1773 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-779000" in "kube-system" namespace to be "Ready" ...
	I0906 12:13:52.439325    1773 pod_ready.go:92] pod "kube-scheduler-functional-779000" in "kube-system" namespace has status "Ready":"True"
	I0906 12:13:52.439331    1773 pod_ready.go:81] duration metric: took 400.133167ms waiting for pod "kube-scheduler-functional-779000" in "kube-system" namespace to be "Ready" ...
	I0906 12:13:52.439336    1773 pod_ready.go:38] duration metric: took 2.599759667s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 12:13:52.439350    1773 api_server.go:52] waiting for apiserver process to appear ...
	I0906 12:13:52.439452    1773 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 12:13:52.444584    1773 api_server.go:72] duration metric: took 2.783087667s to wait for apiserver process to appear ...
	I0906 12:13:52.444588    1773 api_server.go:88] waiting for apiserver healthz status ...
	I0906 12:13:52.444593    1773 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0906 12:13:52.447900    1773 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I0906 12:13:52.448478    1773 api_server.go:141] control plane version: v1.28.1
	I0906 12:13:52.448482    1773 api_server.go:131] duration metric: took 3.892667ms to wait for apiserver health ...
	I0906 12:13:52.448485    1773 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 12:13:52.640964    1773 system_pods.go:59] 7 kube-system pods found
	I0906 12:13:52.640971    1773 system_pods.go:61] "coredns-5dd5756b68-7mpt6" [99ffb644-5c31-4397-a41d-123146bc7822] Running
	I0906 12:13:52.640973    1773 system_pods.go:61] "etcd-functional-779000" [1bbd8174-7114-46f4-840c-cadd830fb7bd] Running
	I0906 12:13:52.640975    1773 system_pods.go:61] "kube-apiserver-functional-779000" [e6d62d2a-145d-4fc5-b2c9-0acdc449ffc5] Running
	I0906 12:13:52.640977    1773 system_pods.go:61] "kube-controller-manager-functional-779000" [2354cb60-5f4a-413f-87bc-42a40e4e59a9] Running
	I0906 12:13:52.640978    1773 system_pods.go:61] "kube-proxy-9l64f" [17d9dd7b-62cc-45d0-807c-bcf97e1f17b5] Running
	I0906 12:13:52.640980    1773 system_pods.go:61] "kube-scheduler-functional-779000" [b4229f91-4d9b-4cfa-b561-df5186739939] Running
	I0906 12:13:52.640981    1773 system_pods.go:61] "storage-provisioner" [8ba268d9-d706-43b1-b613-105f8077cb20] Running
	I0906 12:13:52.640983    1773 system_pods.go:74] duration metric: took 192.498542ms to wait for pod list to return data ...
	I0906 12:13:52.640986    1773 default_sa.go:34] waiting for default service account to be created ...
	I0906 12:13:52.837919    1773 default_sa.go:45] found service account: "default"
	I0906 12:13:52.837924    1773 default_sa.go:55] duration metric: took 196.937542ms for default service account to be created ...
	I0906 12:13:52.837927    1773 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 12:13:53.040215    1773 system_pods.go:86] 7 kube-system pods found
	I0906 12:13:53.040221    1773 system_pods.go:89] "coredns-5dd5756b68-7mpt6" [99ffb644-5c31-4397-a41d-123146bc7822] Running
	I0906 12:13:53.040224    1773 system_pods.go:89] "etcd-functional-779000" [1bbd8174-7114-46f4-840c-cadd830fb7bd] Running
	I0906 12:13:53.040226    1773 system_pods.go:89] "kube-apiserver-functional-779000" [e6d62d2a-145d-4fc5-b2c9-0acdc449ffc5] Running
	I0906 12:13:53.040228    1773 system_pods.go:89] "kube-controller-manager-functional-779000" [2354cb60-5f4a-413f-87bc-42a40e4e59a9] Running
	I0906 12:13:53.040230    1773 system_pods.go:89] "kube-proxy-9l64f" [17d9dd7b-62cc-45d0-807c-bcf97e1f17b5] Running
	I0906 12:13:53.040231    1773 system_pods.go:89] "kube-scheduler-functional-779000" [b4229f91-4d9b-4cfa-b561-df5186739939] Running
	I0906 12:13:53.040233    1773 system_pods.go:89] "storage-provisioner" [8ba268d9-d706-43b1-b613-105f8077cb20] Running
	I0906 12:13:53.040235    1773 system_pods.go:126] duration metric: took 202.308167ms to wait for k8s-apps to be running ...
	I0906 12:13:53.040237    1773 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 12:13:53.040302    1773 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 12:13:53.046274    1773 system_svc.go:56] duration metric: took 6.033875ms WaitForService to wait for kubelet.
	I0906 12:13:53.046280    1773 kubeadm.go:581] duration metric: took 3.384790208s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0906 12:13:53.046289    1773 node_conditions.go:102] verifying NodePressure condition ...
	I0906 12:13:53.239820    1773 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0906 12:13:53.239826    1773 node_conditions.go:123] node cpu capacity is 2
	I0906 12:13:53.239833    1773 node_conditions.go:105] duration metric: took 193.543708ms to run NodePressure ...
	I0906 12:13:53.239839    1773 start.go:228] waiting for startup goroutines ...
	I0906 12:13:53.239842    1773 start.go:233] waiting for cluster config update ...
	I0906 12:13:53.239847    1773 start.go:242] writing updated cluster config ...
	I0906 12:13:53.240231    1773 ssh_runner.go:195] Run: rm -f paused
	I0906 12:13:53.269379    1773 start.go:600] kubectl: 1.27.2, cluster: 1.28.1 (minor skew: 1)
	I0906 12:13:53.273044    1773 out.go:177] * Done! kubectl is now configured to use "functional-779000" cluster and "default" namespace by default
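The final `start.go:600` line reports the difference between the kubectl client and cluster minor versions ("minor skew: 1" for 1.27.2 vs 1.28.1). As a hedged sketch of that arithmetic (assuming simple `major.minor.patch` version strings, not minikube's actual parser):

```python
def minor_skew(kubectl_version: str, cluster_version: str) -> int:
    """Absolute difference between the minor components of two
    Kubernetes version strings of the form 'major.minor.patch'."""
    def minor(v: str) -> int:
        return int(v.split(".")[1])
    return abs(minor(kubectl_version) - minor(cluster_version))

print(minor_skew("1.27.2", "1.28.1"))  # 1
```

Kubernetes tolerates a one-minor-version skew between kubectl and the control plane, which is why the log notes it without raising an error.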
	
	* 
	* ==> Docker <==
	* -- Journal begins at Wed 2023-09-06 19:11:23 UTC, ends at Wed 2023-09-06 19:14:50 UTC. --
	Sep 06 19:14:39 functional-779000 dockerd[7130]: time="2023-09-06T19:14:39.879641107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:14:39 functional-779000 dockerd[7130]: time="2023-09-06T19:14:39.879651607Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 06 19:14:39 functional-779000 dockerd[7130]: time="2023-09-06T19:14:39.879656149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:14:39 functional-779000 cri-dockerd[7402]: time="2023-09-06T19:14:39Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c643611566c5684ff9f8db90ceec0b9fb13fd7a541ef7af0a6c8f36a990d285e/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 06 19:14:41 functional-779000 cri-dockerd[7402]: time="2023-09-06T19:14:41Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	Sep 06 19:14:41 functional-779000 dockerd[7130]: time="2023-09-06T19:14:41.075546946Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 06 19:14:41 functional-779000 dockerd[7130]: time="2023-09-06T19:14:41.075576321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:14:41 functional-779000 dockerd[7130]: time="2023-09-06T19:14:41.075585988Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 06 19:14:41 functional-779000 dockerd[7130]: time="2023-09-06T19:14:41.075592238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:14:41 functional-779000 dockerd[7130]: time="2023-09-06T19:14:41.123002510Z" level=info msg="shim disconnected" id=0cb5eaa0b475c20bc66a06a21910ff349ccbe41f00aacb342e2aa4f25ad21939 namespace=moby
	Sep 06 19:14:41 functional-779000 dockerd[7123]: time="2023-09-06T19:14:41.123145468Z" level=info msg="ignoring event" container=0cb5eaa0b475c20bc66a06a21910ff349ccbe41f00aacb342e2aa4f25ad21939 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 19:14:41 functional-779000 dockerd[7130]: time="2023-09-06T19:14:41.123370552Z" level=warning msg="cleaning up after shim disconnected" id=0cb5eaa0b475c20bc66a06a21910ff349ccbe41f00aacb342e2aa4f25ad21939 namespace=moby
	Sep 06 19:14:41 functional-779000 dockerd[7130]: time="2023-09-06T19:14:41.123380635Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 06 19:14:43 functional-779000 dockerd[7123]: time="2023-09-06T19:14:43.062478975Z" level=info msg="ignoring event" container=c643611566c5684ff9f8db90ceec0b9fb13fd7a541ef7af0a6c8f36a990d285e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 19:14:43 functional-779000 dockerd[7130]: time="2023-09-06T19:14:43.062529266Z" level=info msg="shim disconnected" id=c643611566c5684ff9f8db90ceec0b9fb13fd7a541ef7af0a6c8f36a990d285e namespace=moby
	Sep 06 19:14:43 functional-779000 dockerd[7130]: time="2023-09-06T19:14:43.062555933Z" level=warning msg="cleaning up after shim disconnected" id=c643611566c5684ff9f8db90ceec0b9fb13fd7a541ef7af0a6c8f36a990d285e namespace=moby
	Sep 06 19:14:43 functional-779000 dockerd[7130]: time="2023-09-06T19:14:43.062560225Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 06 19:14:48 functional-779000 dockerd[7130]: time="2023-09-06T19:14:48.493255982Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 06 19:14:48 functional-779000 dockerd[7130]: time="2023-09-06T19:14:48.493299607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:14:48 functional-779000 dockerd[7130]: time="2023-09-06T19:14:48.493313690Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 06 19:14:48 functional-779000 dockerd[7130]: time="2023-09-06T19:14:48.493324398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:14:48 functional-779000 dockerd[7123]: time="2023-09-06T19:14:48.530941927Z" level=info msg="ignoring event" container=4a8f9b332bdec01f7082d82a1750fc38d4f4bc0cac5e68b5eba622bed2ffe9dd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 19:14:48 functional-779000 dockerd[7130]: time="2023-09-06T19:14:48.531020052Z" level=info msg="shim disconnected" id=4a8f9b332bdec01f7082d82a1750fc38d4f4bc0cac5e68b5eba622bed2ffe9dd namespace=moby
	Sep 06 19:14:48 functional-779000 dockerd[7130]: time="2023-09-06T19:14:48.531045260Z" level=warning msg="cleaning up after shim disconnected" id=4a8f9b332bdec01f7082d82a1750fc38d4f4bc0cac5e68b5eba622bed2ffe9dd namespace=moby
	Sep 06 19:14:48 functional-779000 dockerd[7130]: time="2023-09-06T19:14:48.531049344Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID
	4a8f9b332bdec       72565bf5bbedf                                                                                         2 seconds ago        Exited              echoserver-arm            3                   b4bde9d78a34e
	0cb5eaa0b475c       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   9 seconds ago        Exited              mount-munger              0                   c643611566c56
	1ce0e6c38ab01       72565bf5bbedf                                                                                         13 seconds ago       Exited              echoserver-arm            2                   d7718974a5062
	20610a4b6d46a       nginx@sha256:104c7c5c54f2685f0f46f3be607ce60da7085da3eaa5ad22d3d9f01594295e9c                         18 seconds ago       Running             myfrontend                0                   52d42175614e2
	5faff3c16fe8b       nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70                         37 seconds ago       Running             nginx                     0                   83513104f377e
	6aa11675da16d       ba04bb24b9575                                                                                         About a minute ago   Running             storage-provisioner       3                   cc44c59ff4ade
	c42788f564b64       ba04bb24b9575                                                                                         About a minute ago   Exited              storage-provisioner       2                   cc44c59ff4ade
	5dfbf63507821       97e04611ad434                                                                                         About a minute ago   Running             coredns                   2                   8a983a4887005
	a283c9fc4ca55       812f5241df7fd                                                                                         About a minute ago   Running             kube-proxy                2                   7a5cfdc51514e
	c69af285c1079       b29fb62480892                                                                                         About a minute ago   Running             kube-apiserver            0                   3d9c75cd55d4e
	f3377ae28e74a       b4a5a57e99492                                                                                         About a minute ago   Running             kube-scheduler            2                   cb9c722747fd3
	84bcf15289c8d       9cdd6470f48c8                                                                                         About a minute ago   Running             etcd                      2                   89925f7da7b01
	848ec6259eec6       8b6e1980b7584                                                                                         About a minute ago   Running             kube-controller-manager   2                   15bc96e148f2c
	5f7c382d194a9       9cdd6470f48c8                                                                                         About a minute ago   Exited              etcd                      1                   ac910b99d3a31
	3972f5ac3c36d       b4a5a57e99492                                                                                         About a minute ago   Exited              kube-scheduler            1                   68dfe693eaa15
	6f287df4dec12       97e04611ad434                                                                                         About a minute ago   Exited              coredns                   1                   c621621619ad7
	5a100a49eedb3       8b6e1980b7584                                                                                         About a minute ago   Exited              kube-controller-manager   1                   8f227b9527d95
	fa1f611fa0a23       812f5241df7fd                                                                                         About a minute ago   Exited              kube-proxy                1                   c61e51492422d
	
	* 
	* ==> coredns [5dfbf6350782] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:37795 - 32338 "HINFO IN 7567481350485253325.991686005451927386. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.008025591s
	[INFO] 10.244.0.1:3465 - 24818 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000091583s
	[INFO] 10.244.0.1:16643 - 53464 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000093709s
	[INFO] 10.244.0.1:23732 - 64881 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.000955374s
	[INFO] 10.244.0.1:53784 - 49025 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000066833s
	[INFO] 10.244.0.1:43118 - 1650 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000061958s
	[INFO] 10.244.0.1:8779 - 9948 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000121666s
	
	* 
	* ==> coredns [6f287df4dec1] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:48637 - 57299 "HINFO IN 5561437223691523485.7833780892868913418. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.00821415s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               functional-779000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-779000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5d7e0b4e9bf2e12b4952bdb0ed6ce3c8b866f138
	                    minikube.k8s.io/name=functional-779000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_06T12_11_41_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 06 Sep 2023 19:11:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-779000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 06 Sep 2023 19:14:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 06 Sep 2023 19:14:36 +0000   Wed, 06 Sep 2023 19:11:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 06 Sep 2023 19:14:36 +0000   Wed, 06 Sep 2023 19:11:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 06 Sep 2023 19:14:36 +0000   Wed, 06 Sep 2023 19:11:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 06 Sep 2023 19:14:36 +0000   Wed, 06 Sep 2023 19:11:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-779000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 01ce828f6fa240cab6168f4a5422bed0
	  System UUID:                01ce828f6fa240cab6168f4a5422bed0
	  Boot ID:                    29b60e16-d1dd-4f64-b733-9d95235b625d
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.5
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-759d89bdcc-bfvfs                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  default                     hello-node-connect-7799dfb7c6-2rh64          0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         20s
	  kube-system                 coredns-5dd5756b68-7mpt6                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m56s
	  kube-system                 etcd-functional-779000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         3m11s
	  kube-system                 kube-apiserver-functional-779000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-controller-manager-functional-779000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m9s
	  kube-system                 kube-proxy-9l64f                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m56s
	  kube-system                 kube-scheduler-functional-779000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m9s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m55s                  kube-proxy       
	  Normal   Starting                 74s                    kube-proxy       
	  Normal   Starting                 113s                   kube-proxy       
	  Normal   Starting                 3m14s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  3m14s (x8 over 3m14s)  kubelet          Node functional-779000 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m14s (x8 over 3m14s)  kubelet          Node functional-779000 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m14s (x7 over 3m14s)  kubelet          Node functional-779000 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  3m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     3m9s                   kubelet          Node functional-779000 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  3m9s                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  3m9s                   kubelet          Node functional-779000 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m9s                   kubelet          Node functional-779000 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 3m9s                   kubelet          Starting kubelet.
	  Normal   NodeReady                3m6s                   kubelet          Node functional-779000 status is now: NodeReady
	  Normal   RegisteredNode           2m56s                  node-controller  Node functional-779000 event: Registered Node functional-779000 in Controller
	  Warning  ContainerGCFailed        2m9s                   kubelet          rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	  Normal   NodeNotReady             2m8s                   kubelet          Node functional-779000 status is now: NodeNotReady
	  Normal   RegisteredNode           101s                   node-controller  Node functional-779000 event: Registered Node functional-779000 in Controller
	  Normal   Starting                 78s                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  78s (x8 over 78s)      kubelet          Node functional-779000 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    78s (x8 over 78s)      kubelet          Node functional-779000 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     78s (x7 over 78s)      kubelet          Node functional-779000 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  78s                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           62s                    node-controller  Node functional-779000 event: Registered Node functional-779000 in Controller
	
	* 
	* ==> dmesg <==
	* [ +32.066471] systemd-fstab-generator[4314]: Ignoring "noauto" for root device
	[  +0.146874] systemd-fstab-generator[4348]: Ignoring "noauto" for root device
	[  +0.098944] systemd-fstab-generator[4359]: Ignoring "noauto" for root device
	[  +0.093094] systemd-fstab-generator[4372]: Ignoring "noauto" for root device
	[ +11.394976] systemd-fstab-generator[4927]: Ignoring "noauto" for root device
	[  +0.083169] systemd-fstab-generator[4938]: Ignoring "noauto" for root device
	[  +0.079210] systemd-fstab-generator[4949]: Ignoring "noauto" for root device
	[  +0.084846] systemd-fstab-generator[4960]: Ignoring "noauto" for root device
	[  +0.094014] systemd-fstab-generator[5031]: Ignoring "noauto" for root device
	[  +4.915048] kauditd_printk_skb: 29 callbacks suppressed
	[Sep 6 19:13] systemd-fstab-generator[6658]: Ignoring "noauto" for root device
	[  +0.150445] systemd-fstab-generator[6693]: Ignoring "noauto" for root device
	[  +0.098976] systemd-fstab-generator[6704]: Ignoring "noauto" for root device
	[  +0.103973] systemd-fstab-generator[6717]: Ignoring "noauto" for root device
	[ +11.403517] systemd-fstab-generator[7288]: Ignoring "noauto" for root device
	[  +0.081633] systemd-fstab-generator[7299]: Ignoring "noauto" for root device
	[  +0.089220] systemd-fstab-generator[7310]: Ignoring "noauto" for root device
	[  +0.083599] systemd-fstab-generator[7321]: Ignoring "noauto" for root device
	[  +0.106567] systemd-fstab-generator[7395]: Ignoring "noauto" for root device
	[  +1.460984] systemd-fstab-generator[7640]: Ignoring "noauto" for root device
	[  +3.598481] kauditd_printk_skb: 29 callbacks suppressed
	[ +24.190221] kauditd_printk_skb: 9 callbacks suppressed
	[Sep 6 19:14] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[ +13.722069] kauditd_printk_skb: 1 callbacks suppressed
	[ +15.354629] kauditd_printk_skb: 1 callbacks suppressed
	
	* 
	* ==> etcd [5f7c382d194a] <==
	* {"level":"info","ts":"2023-09-06T19:12:54.425853Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-09-06T19:12:56.086804Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 2"}
	{"level":"info","ts":"2023-09-06T19:12:56.086994Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-09-06T19:12:56.087036Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2023-09-06T19:12:56.087072Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2023-09-06T19:12:56.087087Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-09-06T19:12:56.087113Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2023-09-06T19:12:56.087163Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-09-06T19:12:56.089398Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-779000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-06T19:12:56.089596Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-06T19:12:56.090179Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-06T19:12:56.090236Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-06T19:12:56.090279Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-06T19:12:56.092792Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2023-09-06T19:12:56.093066Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-06T19:13:19.206893Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-09-06T19:13:19.206927Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"functional-779000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2023-09-06T19:13:19.206965Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-06T19:13:19.207001Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-06T19:13:19.217443Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-06T19:13:19.217463Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2023-09-06T19:13:19.217483Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2023-09-06T19:13:19.218996Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-09-06T19:13:19.219031Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-09-06T19:13:19.219039Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"functional-779000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	* 
	* ==> etcd [84bcf15289c8] <==
	* {"level":"info","ts":"2023-09-06T19:13:33.252466Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-09-06T19:13:33.252528Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-09-06T19:13:33.252616Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-06T19:13:33.252643Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-06T19:13:33.252673Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-06T19:13:33.252776Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-09-06T19:13:33.252808Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-09-06T19:13:33.25308Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2023-09-06T19:13:33.253123Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2023-09-06T19:13:33.253186Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-06T19:13:33.253215Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-06T19:13:34.3489Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2023-09-06T19:13:34.349058Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-09-06T19:13:34.34913Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-09-06T19:13:34.349165Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2023-09-06T19:13:34.349233Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2023-09-06T19:13:34.349286Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2023-09-06T19:13:34.34933Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2023-09-06T19:13:34.351703Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-779000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-06T19:13:34.351726Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-06T19:13:34.351771Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-06T19:13:34.354513Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2023-09-06T19:13:34.354812Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-06T19:13:34.351979Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-06T19:13:34.35498Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  19:14:50 up 3 min,  0 users,  load average: 0.26, 0.16, 0.06
	Linux functional-779000 5.10.57 #1 SMP PREEMPT Thu Aug 24 12:01:08 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [c69af285c107] <==
	* I0906 19:13:35.026506       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0906 19:13:35.027831       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0906 19:13:35.027847       1 aggregator.go:166] initial CRD sync complete...
	I0906 19:13:35.027851       1 autoregister_controller.go:141] Starting autoregister controller
	I0906 19:13:35.027853       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0906 19:13:35.027856       1 cache.go:39] Caches are synced for autoregister controller
	I0906 19:13:35.031101       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0906 19:13:35.049415       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0906 19:13:35.049446       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0906 19:13:35.049463       1 shared_informer.go:318] Caches are synced for configmaps
	I0906 19:13:35.099374       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0906 19:13:35.928191       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0906 19:13:36.136588       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.105.4]
	I0906 19:13:36.137075       1 controller.go:624] quota admission added evaluator for: endpoints
	I0906 19:13:36.138734       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0906 19:13:36.583223       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0906 19:13:36.586447       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0906 19:13:36.596808       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0906 19:13:36.606229       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0906 19:13:36.609133       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0906 19:13:54.643879       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.98.248.163"}
	I0906 19:14:00.037262       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0906 19:14:00.101728       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.107.196.182"}
	I0906 19:14:10.473439       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.105.159.86"}
	I0906 19:14:20.911697       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.105.166.99"}
	
	* 
	* ==> kube-controller-manager [5a100a49eedb] <==
	* I0906 19:13:09.462805       1 shared_informer.go:318] Caches are synced for daemon sets
	I0906 19:13:09.468001       1 shared_informer.go:318] Caches are synced for node
	I0906 19:13:09.468018       1 range_allocator.go:174] "Sending events to api server"
	I0906 19:13:09.468025       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0906 19:13:09.468027       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0906 19:13:09.468030       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0906 19:13:09.469137       1 shared_informer.go:318] Caches are synced for taint
	I0906 19:13:09.469165       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0906 19:13:09.469199       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-779000"
	I0906 19:13:09.469218       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0906 19:13:09.469240       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0906 19:13:09.469273       1 taint_manager.go:211] "Sending events to api server"
	I0906 19:13:09.469374       1 event.go:307] "Event occurred" object="functional-779000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-779000 event: Registered Node functional-779000 in Controller"
	I0906 19:13:09.475119       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0906 19:13:09.516327       1 shared_informer.go:318] Caches are synced for endpoint
	I0906 19:13:09.522280       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0906 19:13:09.522398       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="39.458µs"
	I0906 19:13:09.523053       1 event.go:307] "Event occurred" object="kube-system/kube-dns" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpoint" message="Failed to update endpoint kube-system/kube-dns: Operation cannot be fulfilled on endpoints \"kube-dns\": the object has been modified; please apply your changes to the latest version and try again"
	I0906 19:13:09.528886       1 shared_informer.go:318] Caches are synced for deployment
	I0906 19:13:09.544512       1 shared_informer.go:318] Caches are synced for resource quota
	I0906 19:13:09.561799       1 shared_informer.go:318] Caches are synced for disruption
	I0906 19:13:09.578296       1 shared_informer.go:318] Caches are synced for resource quota
	I0906 19:13:09.892364       1 shared_informer.go:318] Caches are synced for garbage collector
	I0906 19:13:09.912540       1 shared_informer.go:318] Caches are synced for garbage collector
	I0906 19:13:09.912554       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	* 
	* ==> kube-controller-manager [848ec6259eec] <==
	* I0906 19:13:48.432898       1 shared_informer.go:318] Caches are synced for garbage collector
	I0906 19:13:48.432935       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0906 19:14:00.039486       1 event.go:307] "Event occurred" object="default/hello-node" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-759d89bdcc to 1"
	I0906 19:14:00.052493       1 event.go:307] "Event occurred" object="default/hello-node-759d89bdcc" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-759d89bdcc-bfvfs"
	I0906 19:14:00.064475       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="25.140499ms"
	I0906 19:14:00.081169       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="16.544777ms"
	I0906 19:14:00.081452       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="17.083µs"
	I0906 19:14:00.082334       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="31.291µs"
	I0906 19:14:07.698444       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="30.75µs"
	I0906 19:14:08.701342       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="42.249µs"
	I0906 19:14:09.709979       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="38.292µs"
	I0906 19:14:18.789585       1 event.go:307] "Event occurred" object="default/myclaim" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'k8s.io/minikube-hostpath' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0906 19:14:20.854525       1 event.go:307] "Event occurred" object="default/hello-node-connect" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-connect-7799dfb7c6 to 1"
	I0906 19:14:20.857686       1 event.go:307] "Event occurred" object="default/hello-node-connect-7799dfb7c6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-connect-7799dfb7c6-2rh64"
	I0906 19:14:20.859910       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="5.296162ms"
	I0906 19:14:20.869600       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="9.555533ms"
	I0906 19:14:20.869765       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="34.458µs"
	I0906 19:14:20.885955       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="37.292µs"
	I0906 19:14:21.776289       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="24.334µs"
	I0906 19:14:21.801432       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="33.459µs"
	I0906 19:14:22.816023       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="21.542µs"
	I0906 19:14:35.452846       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="23.292µs"
	I0906 19:14:37.454801       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="24.042µs"
	I0906 19:14:37.917334       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="21.667µs"
	I0906 19:14:49.045102       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="25.541µs"
	
	* 
	* ==> kube-proxy [a283c9fc4ca5] <==
	* I0906 19:13:36.035084       1 server_others.go:69] "Using iptables proxy"
	I0906 19:13:36.054908       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I0906 19:13:36.084283       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0906 19:13:36.084299       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0906 19:13:36.085173       1 server_others.go:152] "Using iptables Proxier"
	I0906 19:13:36.085191       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0906 19:13:36.085248       1 server.go:846] "Version info" version="v1.28.1"
	I0906 19:13:36.085252       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 19:13:36.087129       1 config.go:188] "Starting service config controller"
	I0906 19:13:36.087144       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0906 19:13:36.087155       1 config.go:97] "Starting endpoint slice config controller"
	I0906 19:13:36.087158       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0906 19:13:36.087418       1 config.go:315] "Starting node config controller"
	I0906 19:13:36.087426       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0906 19:13:36.187494       1 shared_informer.go:318] Caches are synced for node config
	I0906 19:13:36.187503       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0906 19:13:36.187494       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-proxy [fa1f611fa0a2] <==
	* I0906 19:12:54.233602       1 server_others.go:69] "Using iptables proxy"
	I0906 19:12:56.747134       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I0906 19:12:56.757972       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0906 19:12:56.757998       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0906 19:12:56.758606       1 server_others.go:152] "Using iptables Proxier"
	I0906 19:12:56.758642       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0906 19:12:56.758740       1 server.go:846] "Version info" version="v1.28.1"
	I0906 19:12:56.758748       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 19:12:56.758997       1 config.go:188] "Starting service config controller"
	I0906 19:12:56.759009       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0906 19:12:56.759016       1 config.go:97] "Starting endpoint slice config controller"
	I0906 19:12:56.759028       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0906 19:12:56.759231       1 config.go:315] "Starting node config controller"
	I0906 19:12:56.759258       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0906 19:12:56.859813       1 shared_informer.go:318] Caches are synced for node config
	I0906 19:12:56.859908       1 shared_informer.go:318] Caches are synced for service config
	I0906 19:12:56.859925       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [3972f5ac3c36] <==
	* I0906 19:12:54.909588       1 serving.go:348] Generated self-signed cert in-memory
	W0906 19:12:56.705984       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0906 19:12:56.706066       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0906 19:12:56.706102       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0906 19:12:56.706119       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0906 19:12:56.723379       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.1"
	I0906 19:12:56.723463       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 19:12:56.724358       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0906 19:12:56.724435       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0906 19:12:56.724473       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0906 19:12:56.724494       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0906 19:12:56.824997       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0906 19:13:19.212845       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0906 19:13:19.212871       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0906 19:13:19.212918       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [f3377ae28e74] <==
	* I0906 19:13:33.770061       1 serving.go:348] Generated self-signed cert in-memory
	W0906 19:13:34.998430       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0906 19:13:34.998520       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0906 19:13:34.998554       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0906 19:13:34.998571       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0906 19:13:35.008957       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.1"
	I0906 19:13:35.008972       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 19:13:35.010012       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0906 19:13:35.010051       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0906 19:13:35.010053       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0906 19:13:35.010060       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0906 19:13:35.110586       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-09-06 19:11:23 UTC, ends at Wed 2023-09-06 19:14:50 UTC. --
	Sep 06 19:14:32 functional-779000 kubelet[7646]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 06 19:14:32 functional-779000 kubelet[7646]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 06 19:14:32 functional-779000 kubelet[7646]: I0906 19:14:32.521623    7646 scope.go:117] "RemoveContainer" containerID="d7fd69da5c47615b993c9e4f56c6337c802251cff51dc94d6c423c474e140fd2"
	Sep 06 19:14:35 functional-779000 kubelet[7646]: I0906 19:14:35.446925    7646 scope.go:117] "RemoveContainer" containerID="e14c78e491ac7650e78c08e20d882e3821118978e7339c57c02e215216456b78"
	Sep 06 19:14:35 functional-779000 kubelet[7646]: E0906 19:14:35.447041    7646 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-759d89bdcc-bfvfs_default(d41b6ca7-a298-4090-8212-574cc9b7e1c0)\"" pod="default/hello-node-759d89bdcc-bfvfs" podUID="d41b6ca7-a298-4090-8212-574cc9b7e1c0"
	Sep 06 19:14:35 functional-779000 kubelet[7646]: I0906 19:14:35.452315    7646 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/sp-pod" podStartSLOduration=4.607273979 podCreationTimestamp="2023-09-06 19:14:30 +0000 UTC" firstStartedPulling="2023-09-06 19:14:31.417362239 +0000 UTC m=+59.050772594" lastFinishedPulling="2023-09-06 19:14:32.262383782 +0000 UTC m=+59.895794137" observedRunningTime="2023-09-06 19:14:32.891376237 +0000 UTC m=+60.524786592" watchObservedRunningTime="2023-09-06 19:14:35.452295522 +0000 UTC m=+63.085705919"
	Sep 06 19:14:37 functional-779000 kubelet[7646]: I0906 19:14:37.447290    7646 scope.go:117] "RemoveContainer" containerID="1fdbfab5bc9a40f57b3a3b4334327440e087d4f23e944fe0300a8a8c15c2e603"
	Sep 06 19:14:37 functional-779000 kubelet[7646]: I0906 19:14:37.910916    7646 scope.go:117] "RemoveContainer" containerID="1fdbfab5bc9a40f57b3a3b4334327440e087d4f23e944fe0300a8a8c15c2e603"
	Sep 06 19:14:37 functional-779000 kubelet[7646]: I0906 19:14:37.911081    7646 scope.go:117] "RemoveContainer" containerID="1ce0e6c38ab01eaa6ab7cda958e2e491faefa82aae24e0cf29a5352ce041d6da"
	Sep 06 19:14:37 functional-779000 kubelet[7646]: E0906 19:14:37.911172    7646 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-7799dfb7c6-2rh64_default(a9da556c-351b-4b07-9214-169bda88f9f9)\"" pod="default/hello-node-connect-7799dfb7c6-2rh64" podUID="a9da556c-351b-4b07-9214-169bda88f9f9"
	Sep 06 19:14:39 functional-779000 kubelet[7646]: I0906 19:14:39.545094    7646 topology_manager.go:215] "Topology Admit Handler" podUID="97723327-f70c-4e49-86ef-5c912e219056" podNamespace="default" podName="busybox-mount"
	Sep 06 19:14:39 functional-779000 kubelet[7646]: I0906 19:14:39.696689    7646 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/97723327-f70c-4e49-86ef-5c912e219056-test-volume\") pod \"busybox-mount\" (UID: \"97723327-f70c-4e49-86ef-5c912e219056\") " pod="default/busybox-mount"
	Sep 06 19:14:39 functional-779000 kubelet[7646]: I0906 19:14:39.696730    7646 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rckc\" (UniqueName: \"kubernetes.io/projected/97723327-f70c-4e49-86ef-5c912e219056-kube-api-access-4rckc\") pod \"busybox-mount\" (UID: \"97723327-f70c-4e49-86ef-5c912e219056\") " pod="default/busybox-mount"
	Sep 06 19:14:39 functional-779000 kubelet[7646]: I0906 19:14:39.989795    7646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c643611566c5684ff9f8db90ceec0b9fb13fd7a541ef7af0a6c8f36a990d285e"
	Sep 06 19:14:43 functional-779000 kubelet[7646]: I0906 19:14:43.215811    7646 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rckc\" (UniqueName: \"kubernetes.io/projected/97723327-f70c-4e49-86ef-5c912e219056-kube-api-access-4rckc\") pod \"97723327-f70c-4e49-86ef-5c912e219056\" (UID: \"97723327-f70c-4e49-86ef-5c912e219056\") "
	Sep 06 19:14:43 functional-779000 kubelet[7646]: I0906 19:14:43.216057    7646 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/97723327-f70c-4e49-86ef-5c912e219056-test-volume\") pod \"97723327-f70c-4e49-86ef-5c912e219056\" (UID: \"97723327-f70c-4e49-86ef-5c912e219056\") "
	Sep 06 19:14:43 functional-779000 kubelet[7646]: I0906 19:14:43.216081    7646 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97723327-f70c-4e49-86ef-5c912e219056-test-volume" (OuterVolumeSpecName: "test-volume") pod "97723327-f70c-4e49-86ef-5c912e219056" (UID: "97723327-f70c-4e49-86ef-5c912e219056"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 06 19:14:43 functional-779000 kubelet[7646]: I0906 19:14:43.216435    7646 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97723327-f70c-4e49-86ef-5c912e219056-kube-api-access-4rckc" (OuterVolumeSpecName: "kube-api-access-4rckc") pod "97723327-f70c-4e49-86ef-5c912e219056" (UID: "97723327-f70c-4e49-86ef-5c912e219056"). InnerVolumeSpecName "kube-api-access-4rckc". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 06 19:14:43 functional-779000 kubelet[7646]: I0906 19:14:43.316694    7646 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-4rckc\" (UniqueName: \"kubernetes.io/projected/97723327-f70c-4e49-86ef-5c912e219056-kube-api-access-4rckc\") on node \"functional-779000\" DevicePath \"\""
	Sep 06 19:14:43 functional-779000 kubelet[7646]: I0906 19:14:43.316706    7646 reconciler_common.go:300] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/97723327-f70c-4e49-86ef-5c912e219056-test-volume\") on node \"functional-779000\" DevicePath \"\""
	Sep 06 19:14:44 functional-779000 kubelet[7646]: I0906 19:14:44.016332    7646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c643611566c5684ff9f8db90ceec0b9fb13fd7a541ef7af0a6c8f36a990d285e"
	Sep 06 19:14:48 functional-779000 kubelet[7646]: I0906 19:14:48.449376    7646 scope.go:117] "RemoveContainer" containerID="e14c78e491ac7650e78c08e20d882e3821118978e7339c57c02e215216456b78"
	Sep 06 19:14:49 functional-779000 kubelet[7646]: I0906 19:14:49.039483    7646 scope.go:117] "RemoveContainer" containerID="e14c78e491ac7650e78c08e20d882e3821118978e7339c57c02e215216456b78"
	Sep 06 19:14:49 functional-779000 kubelet[7646]: I0906 19:14:49.039643    7646 scope.go:117] "RemoveContainer" containerID="4a8f9b332bdec01f7082d82a1750fc38d4f4bc0cac5e68b5eba622bed2ffe9dd"
	Sep 06 19:14:49 functional-779000 kubelet[7646]: E0906 19:14:49.039729    7646 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 40s restarting failed container=echoserver-arm pod=hello-node-759d89bdcc-bfvfs_default(d41b6ca7-a298-4090-8212-574cc9b7e1c0)\"" pod="default/hello-node-759d89bdcc-bfvfs" podUID="d41b6ca7-a298-4090-8212-574cc9b7e1c0"
	
	* 
	* ==> storage-provisioner [6aa11675da16] <==
	* I0906 19:13:50.534122       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0906 19:13:50.540955       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0906 19:13:50.540972       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0906 19:14:07.926563       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0906 19:14:07.926628       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-779000_c653554c-d0e9-4261-bb84-6f48d07482a9!
	I0906 19:14:07.926987       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"83c0d72a-5d42-4dfd-935b-c07ee3bb2126", APIVersion:"v1", ResourceVersion:"720", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-779000_c653554c-d0e9-4261-bb84-6f48d07482a9 became leader
	I0906 19:14:08.027193       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-779000_c653554c-d0e9-4261-bb84-6f48d07482a9!
	I0906 19:14:18.790479       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0906 19:14:18.790565       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    f388f8ad-2d8b-4bd4-9d30-697da4b17d34 403 0 2023-09-06 19:11:55 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2023-09-06 19:11:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-fea4637f-95f0-4baf-8c40-2b8cea917273 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  fea4637f-95f0-4baf-8c40-2b8cea917273 752 0 2023-09-06 19:14:18 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["Rea
dWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2023-09-06 19:14:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2023-09-06 19:14:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:
ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0906 19:14:18.791220       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-fea4637f-95f0-4baf-8c40-2b8cea917273" provisioned
	I0906 19:14:18.791233       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0906 19:14:18.791236       1 volume_store.go:212] Trying to save persistentvolume "pvc-fea4637f-95f0-4baf-8c40-2b8cea917273"
	I0906 19:14:18.791796       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"fea4637f-95f0-4baf-8c40-2b8cea917273", APIVersion:"v1", ResourceVersion:"752", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0906 19:14:18.797408       1 volume_store.go:219] persistentvolume "pvc-fea4637f-95f0-4baf-8c40-2b8cea917273" saved
	I0906 19:14:18.797549       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"fea4637f-95f0-4baf-8c40-2b8cea917273", APIVersion:"v1", ResourceVersion:"752", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-fea4637f-95f0-4baf-8c40-2b8cea917273
	
	* 
	* ==> storage-provisioner [c42788f564b6] <==
	* I0906 19:13:36.135044       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0906 19:13:36.135646       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-779000 -n functional-779000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-779000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-779000 describe pod busybox-mount
helpers_test.go:282: (dbg) kubectl --context functional-779000 describe pod busybox-mount:

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-779000/192.168.105.4
	Start Time:       Wed, 06 Sep 2023 12:14:39 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.12
	IPs:
	  IP:  10.244.0.12
	Containers:
	  mount-munger:
	    Container ID:  docker://0cb5eaa0b475c20bc66a06a21910ff349ccbe41f00aacb342e2aa4f25ad21939
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Wed, 06 Sep 2023 12:14:41 -0700
	      Finished:     Wed, 06 Sep 2023 12:14:41 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4rckc (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-4rckc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  11s   default-scheduler  Successfully assigned default/busybox-mount to functional-779000
	  Normal  Pulling    11s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.042s (1.042s including waiting)
	  Normal  Created    10s   kubelet            Created container mount-munger
	  Normal  Started    10s   kubelet            Started container mount-munger

-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (30.35s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.17s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-779000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-779000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 80. stderr: I0906 12:14:10.223901    1949 out.go:296] Setting OutFile to fd 1 ...
I0906 12:14:10.224138    1949 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0906 12:14:10.224142    1949 out.go:309] Setting ErrFile to fd 2...
I0906 12:14:10.224144    1949 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0906 12:14:10.224273    1949 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17116-1006/.minikube/bin
I0906 12:14:10.224493    1949 mustload.go:65] Loading cluster: functional-779000
I0906 12:14:10.224693    1949 config.go:182] Loaded profile config "functional-779000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0906 12:14:10.229198    1949 out.go:177] 
W0906 12:14:10.232258    1949 out.go:239] X Exiting due to GUEST_STATUS: Unable to get machine status: state: connect: dial unix /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/functional-779000/monitor: connect: connection refused
X Exiting due to GUEST_STATUS: Unable to get machine status: state: connect: dial unix /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/functional-779000/monitor: connect: connection refused
W0906 12:14:10.232263    1949 out.go:239] * 
* 
W0906 12:14:10.233592    1949 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                           │
│    * If the above advice does not help, please let us know:                                                               │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
│                                                                                                                           │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
│    * Please also attach the following file to the GitHub issue:                                                           │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_tunnel_7075cb44437691034d825beac909ba5df9688569_0.log    │
│                                                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                           │
│    * If the above advice does not help, please let us know:                                                               │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
│                                                                                                                           │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
│    * Please also attach the following file to the GitHub issue:                                                           │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_tunnel_7075cb44437691034d825beac909ba5df9688569_0.log    │
│                                                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0906 12:14:10.237082    1949 out.go:177] 

stdout: 

functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-779000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-779000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-779000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-779000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1948: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-779000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-779000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.17s)

TestImageBuild/serial/BuildWithBuildArg (1.09s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-147000
image_test.go:105: failed to pass build-args with args: "out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-147000" : 
-- stdout --
	Sending build context to Docker daemon  2.048kB
	Step 1/5 : FROM gcr.io/google-containers/alpine-with-bash:1.0
	 ---> 822c13824dc2
	Step 2/5 : ARG ENV_A
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in 68351ea4ff60
	Removing intermediate container 68351ea4ff60
	 ---> e0a4d27c79ec
	Step 3/5 : ARG ENV_B
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in 1e2ed2f0c05a
	Removing intermediate container 1e2ed2f0c05a
	 ---> cf4974c89c57
	Step 4/5 : RUN echo "test-build-arg" $ENV_A $ENV_B
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in 86c37d6d9279
	exec /bin/sh: exec format error
	

-- /stdout --
** stderr ** 
	DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
	            Install the buildx component to build images with BuildKit:
	            https://docs.docker.com/go/buildx/
	
	The command '/bin/sh -c echo "test-build-arg" $ENV_A $ENV_B' returned a non-zero code: 1

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-147000 -n image-147000
helpers_test.go:244: <<< TestImageBuild/serial/BuildWithBuildArg FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestImageBuild/serial/BuildWithBuildArg]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p image-147000 logs -n 25
helpers_test.go:252: TestImageBuild/serial/BuildWithBuildArg logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                   Args                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-779000 ssh findmnt            | functional-779000 | jenkins | v1.31.2 | 06 Sep 23 12:14 PDT |                     |
	|                | -T /mount2                               |                   |         |         |                     |                     |
	| ssh            | functional-779000 ssh findmnt            | functional-779000 | jenkins | v1.31.2 | 06 Sep 23 12:14 PDT | 06 Sep 23 12:14 PDT |
	|                | -T /mount1                               |                   |         |         |                     |                     |
	| ssh            | functional-779000 ssh findmnt            | functional-779000 | jenkins | v1.31.2 | 06 Sep 23 12:14 PDT |                     |
	|                | -T /mount2                               |                   |         |         |                     |                     |
	| start          | -p functional-779000                     | functional-779000 | jenkins | v1.31.2 | 06 Sep 23 12:14 PDT |                     |
	|                | --dry-run --memory                       |                   |         |         |                     |                     |
	|                | 250MB --alsologtostderr                  |                   |         |         |                     |                     |
	|                | --driver=qemu2                           |                   |         |         |                     |                     |
	| start          | -p functional-779000 --dry-run           | functional-779000 | jenkins | v1.31.2 | 06 Sep 23 12:14 PDT |                     |
	|                | --alsologtostderr -v=1                   |                   |         |         |                     |                     |
	|                | --driver=qemu2                           |                   |         |         |                     |                     |
	| start          | -p functional-779000                     | functional-779000 | jenkins | v1.31.2 | 06 Sep 23 12:14 PDT |                     |
	|                | --dry-run --memory                       |                   |         |         |                     |                     |
	|                | 250MB --alsologtostderr                  |                   |         |         |                     |                     |
	|                | --driver=qemu2                           |                   |         |         |                     |                     |
	| dashboard      | --url --port 36195                       | functional-779000 | jenkins | v1.31.2 | 06 Sep 23 12:14 PDT | 06 Sep 23 12:14 PDT |
	|                | -p functional-779000                     |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                   |                   |         |         |                     |                     |
	| ssh            | functional-779000 ssh findmnt            | functional-779000 | jenkins | v1.31.2 | 06 Sep 23 12:14 PDT | 06 Sep 23 12:14 PDT |
	|                | -T /mount1                               |                   |         |         |                     |                     |
	| ssh            | functional-779000 ssh findmnt            | functional-779000 | jenkins | v1.31.2 | 06 Sep 23 12:14 PDT |                     |
	|                | -T /mount2                               |                   |         |         |                     |                     |
	| ssh            | functional-779000 ssh findmnt            | functional-779000 | jenkins | v1.31.2 | 06 Sep 23 12:14 PDT | 06 Sep 23 12:14 PDT |
	|                | -T /mount1                               |                   |         |         |                     |                     |
	| ssh            | functional-779000 ssh findmnt            | functional-779000 | jenkins | v1.31.2 | 06 Sep 23 12:14 PDT |                     |
	|                | -T /mount2                               |                   |         |         |                     |                     |
	| update-context | functional-779000                        | functional-779000 | jenkins | v1.31.2 | 06 Sep 23 12:14 PDT | 06 Sep 23 12:14 PDT |
	|                | update-context                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                   |         |         |                     |                     |
	| update-context | functional-779000                        | functional-779000 | jenkins | v1.31.2 | 06 Sep 23 12:14 PDT | 06 Sep 23 12:14 PDT |
	|                | update-context                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                   |         |         |                     |                     |
	| update-context | functional-779000                        | functional-779000 | jenkins | v1.31.2 | 06 Sep 23 12:14 PDT | 06 Sep 23 12:14 PDT |
	|                | update-context                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                   |         |         |                     |                     |
	| image          | functional-779000                        | functional-779000 | jenkins | v1.31.2 | 06 Sep 23 12:14 PDT | 06 Sep 23 12:14 PDT |
	|                | image ls --format yaml                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                        |                   |         |         |                     |                     |
	| image          | functional-779000                        | functional-779000 | jenkins | v1.31.2 | 06 Sep 23 12:14 PDT | 06 Sep 23 12:14 PDT |
	|                | image ls --format short                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                        |                   |         |         |                     |                     |
	| image          | functional-779000                        | functional-779000 | jenkins | v1.31.2 | 06 Sep 23 12:14 PDT | 06 Sep 23 12:14 PDT |
	|                | image ls --format json                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                        |                   |         |         |                     |                     |
	| image          | functional-779000                        | functional-779000 | jenkins | v1.31.2 | 06 Sep 23 12:14 PDT | 06 Sep 23 12:14 PDT |
	|                | image ls --format table                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                        |                   |         |         |                     |                     |
	| ssh            | functional-779000 ssh pgrep              | functional-779000 | jenkins | v1.31.2 | 06 Sep 23 12:14 PDT |                     |
	|                | buildkitd                                |                   |         |         |                     |                     |
	| image          | functional-779000 image build -t         | functional-779000 | jenkins | v1.31.2 | 06 Sep 23 12:14 PDT | 06 Sep 23 12:14 PDT |
	|                | localhost/my-image:functional-779000     |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr         |                   |         |         |                     |                     |
	| image          | functional-779000 image ls               | functional-779000 | jenkins | v1.31.2 | 06 Sep 23 12:14 PDT | 06 Sep 23 12:14 PDT |
	| delete         | -p functional-779000                     | functional-779000 | jenkins | v1.31.2 | 06 Sep 23 12:14 PDT | 06 Sep 23 12:14 PDT |
	| start          | -p image-147000 --driver=qemu2           | image-147000      | jenkins | v1.31.2 | 06 Sep 23 12:14 PDT | 06 Sep 23 12:15 PDT |
	|                |                                          |                   |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-147000      | jenkins | v1.31.2 | 06 Sep 23 12:15 PDT | 06 Sep 23 12:15 PDT |
	|                | ./testdata/image-build/test-normal       |                   |         |         |                     |                     |
	|                | -p image-147000                          |                   |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-147000      | jenkins | v1.31.2 | 06 Sep 23 12:15 PDT | 06 Sep 23 12:15 PDT |
	|                | --build-opt=build-arg=ENV_A=test_env_str |                   |         |         |                     |                     |
	|                | --build-opt=no-cache                     |                   |         |         |                     |                     |
	|                | ./testdata/image-build/test-arg -p       |                   |         |         |                     |                     |
	|                | image-147000                             |                   |         |         |                     |                     |
	|----------------|------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/06 12:14:59
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.7 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 12:14:59.749415    2174 out.go:296] Setting OutFile to fd 1 ...
	I0906 12:14:59.749531    2174 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:14:59.749533    2174 out.go:309] Setting ErrFile to fd 2...
	I0906 12:14:59.749534    2174 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:14:59.749639    2174 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17116-1006/.minikube/bin
	I0906 12:14:59.750659    2174 out.go:303] Setting JSON to false
	I0906 12:14:59.766109    2174 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":873,"bootTime":1694026826,"procs":402,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:14:59.766193    2174 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 12:14:59.770496    2174 out.go:177] * [image-147000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 12:14:59.778542    2174 out.go:177]   - MINIKUBE_LOCATION=17116
	I0906 12:14:59.778556    2174 notify.go:220] Checking for updates...
	I0906 12:14:59.782371    2174 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	I0906 12:14:59.785455    2174 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:14:59.788441    2174 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:14:59.791413    2174 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	I0906 12:14:59.794457    2174 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:14:59.797588    2174 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 12:14:59.801392    2174 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 12:14:59.808459    2174 start.go:298] selected driver: qemu2
	I0906 12:14:59.808464    2174 start.go:902] validating driver "qemu2" against <nil>
	I0906 12:14:59.808469    2174 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:14:59.808523    2174 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0906 12:14:59.811460    2174 out.go:177] * Automatically selected the socket_vmnet network
	I0906 12:14:59.816622    2174 start_flags.go:384] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0906 12:14:59.816707    2174 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0906 12:14:59.816722    2174 cni.go:84] Creating CNI manager for ""
	I0906 12:14:59.816727    2174 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 12:14:59.816749    2174 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 12:14:59.816764    2174 start_flags.go:321] config:
	{Name:image-147000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:image-147000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 12:14:59.820745    2174 iso.go:125] acquiring lock: {Name:mk3f7665f27187d07892ba5c4ce3c49cda04e887 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:14:59.827357    2174 out.go:177] * Starting control plane node image-147000 in cluster image-147000
	I0906 12:14:59.831489    2174 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 12:14:59.831504    2174 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0906 12:14:59.831521    2174 cache.go:57] Caching tarball of preloaded images
	I0906 12:14:59.831577    2174 preload.go:174] Found /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 12:14:59.831581    2174 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0906 12:14:59.831770    2174 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/image-147000/config.json ...
	I0906 12:14:59.831780    2174 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/image-147000/config.json: {Name:mk8f004f90bbf4bfc31ff7b204f4bce0925f5daf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:14:59.831966    2174 start.go:365] acquiring machines lock for image-147000: {Name:mkcd55fe6404821d1fd04819f324aec293fbf60b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:14:59.831989    2174 start.go:369] acquired machines lock for "image-147000" in 20.542µs
	I0906 12:14:59.831997    2174 start.go:93] Provisioning new machine with config: &{Name:image-147000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.1 ClusterName:image-147000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:14:59.832022    2174 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:14:59.839432    2174 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0906 12:14:59.860416    2174 start.go:159] libmachine.API.Create for "image-147000" (driver="qemu2")
	I0906 12:14:59.860433    2174 client.go:168] LocalClient.Create starting
	I0906 12:14:59.860498    2174 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem
	I0906 12:14:59.860523    2174 main.go:141] libmachine: Decoding PEM data...
	I0906 12:14:59.860534    2174 main.go:141] libmachine: Parsing certificate...
	I0906 12:14:59.860572    2174 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/cert.pem
	I0906 12:14:59.860596    2174 main.go:141] libmachine: Decoding PEM data...
	I0906 12:14:59.860605    2174 main.go:141] libmachine: Parsing certificate...
	I0906 12:14:59.860902    2174 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17116-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 12:15:00.050508    2174 main.go:141] libmachine: Creating SSH key...
	I0906 12:15:00.213938    2174 main.go:141] libmachine: Creating Disk image...
	I0906 12:15:00.213944    2174 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:15:00.214164    2174 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/image-147000/disk.qcow2.raw /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/image-147000/disk.qcow2
	I0906 12:15:00.227492    2174 main.go:141] libmachine: STDOUT: 
	I0906 12:15:00.227507    2174 main.go:141] libmachine: STDERR: 
	I0906 12:15:00.227571    2174 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/image-147000/disk.qcow2 +20000M
	I0906 12:15:00.237166    2174 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:15:00.237179    2174 main.go:141] libmachine: STDERR: 
	I0906 12:15:00.237201    2174 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/image-147000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/image-147000/disk.qcow2
	I0906 12:15:00.237207    2174 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:15:00.237248    2174 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/image-147000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/image-147000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/image-147000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:1c:6b:d2:00:15 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/image-147000/disk.qcow2
	I0906 12:15:00.275028    2174 main.go:141] libmachine: STDOUT: 
	I0906 12:15:00.275046    2174 main.go:141] libmachine: STDERR: 
	I0906 12:15:00.275049    2174 main.go:141] libmachine: Attempt 0
	I0906 12:15:00.275061    2174 main.go:141] libmachine: Searching for 26:1c:6b:d2:0:15 in /var/db/dhcpd_leases ...
	I0906 12:15:00.275153    2174 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0906 12:15:00.275171    2174 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ee:a1:3b:da:77:14 ID:1,ee:a1:3b:da:77:14 Lease:0x64fa205b}
	I0906 12:15:00.275175    2174 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:b2:a6:d2:d8:8b:16 ID:1,b2:a6:d2:d8:8b:16 Lease:0x64f8cece}
	I0906 12:15:00.275180    2174 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:92:df:5f:68:42:49 ID:1,92:df:5f:68:42:49 Lease:0x64fa200c}
	I0906 12:15:02.277316    2174 main.go:141] libmachine: Attempt 1
	I0906 12:15:02.277409    2174 main.go:141] libmachine: Searching for 26:1c:6b:d2:0:15 in /var/db/dhcpd_leases ...
	I0906 12:15:02.277694    2174 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0906 12:15:02.277737    2174 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ee:a1:3b:da:77:14 ID:1,ee:a1:3b:da:77:14 Lease:0x64fa205b}
	I0906 12:15:02.277763    2174 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:b2:a6:d2:d8:8b:16 ID:1,b2:a6:d2:d8:8b:16 Lease:0x64f8cece}
	I0906 12:15:02.277820    2174 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:92:df:5f:68:42:49 ID:1,92:df:5f:68:42:49 Lease:0x64fa200c}
	I0906 12:15:04.278885    2174 main.go:141] libmachine: Attempt 2
	I0906 12:15:04.278896    2174 main.go:141] libmachine: Searching for 26:1c:6b:d2:0:15 in /var/db/dhcpd_leases ...
	I0906 12:15:04.279006    2174 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0906 12:15:04.279015    2174 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ee:a1:3b:da:77:14 ID:1,ee:a1:3b:da:77:14 Lease:0x64fa205b}
	I0906 12:15:04.279019    2174 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:b2:a6:d2:d8:8b:16 ID:1,b2:a6:d2:d8:8b:16 Lease:0x64f8cece}
	I0906 12:15:04.279024    2174 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:92:df:5f:68:42:49 ID:1,92:df:5f:68:42:49 Lease:0x64fa200c}
	I0906 12:15:06.281035    2174 main.go:141] libmachine: Attempt 3
	I0906 12:15:06.281039    2174 main.go:141] libmachine: Searching for 26:1c:6b:d2:0:15 in /var/db/dhcpd_leases ...
	I0906 12:15:06.281071    2174 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0906 12:15:06.281076    2174 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ee:a1:3b:da:77:14 ID:1,ee:a1:3b:da:77:14 Lease:0x64fa205b}
	I0906 12:15:06.281084    2174 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:b2:a6:d2:d8:8b:16 ID:1,b2:a6:d2:d8:8b:16 Lease:0x64f8cece}
	I0906 12:15:06.281088    2174 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:92:df:5f:68:42:49 ID:1,92:df:5f:68:42:49 Lease:0x64fa200c}
	I0906 12:15:08.283124    2174 main.go:141] libmachine: Attempt 4
	I0906 12:15:08.283137    2174 main.go:141] libmachine: Searching for 26:1c:6b:d2:0:15 in /var/db/dhcpd_leases ...
	I0906 12:15:08.283234    2174 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0906 12:15:08.283260    2174 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ee:a1:3b:da:77:14 ID:1,ee:a1:3b:da:77:14 Lease:0x64fa205b}
	I0906 12:15:08.283265    2174 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:b2:a6:d2:d8:8b:16 ID:1,b2:a6:d2:d8:8b:16 Lease:0x64f8cece}
	I0906 12:15:08.283270    2174 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:92:df:5f:68:42:49 ID:1,92:df:5f:68:42:49 Lease:0x64fa200c}
	I0906 12:15:10.285305    2174 main.go:141] libmachine: Attempt 5
	I0906 12:15:10.285318    2174 main.go:141] libmachine: Searching for 26:1c:6b:d2:0:15 in /var/db/dhcpd_leases ...
	I0906 12:15:10.285405    2174 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0906 12:15:10.285413    2174 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ee:a1:3b:da:77:14 ID:1,ee:a1:3b:da:77:14 Lease:0x64fa205b}
	I0906 12:15:10.285418    2174 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:b2:a6:d2:d8:8b:16 ID:1,b2:a6:d2:d8:8b:16 Lease:0x64f8cece}
	I0906 12:15:10.285426    2174 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:92:df:5f:68:42:49 ID:1,92:df:5f:68:42:49 Lease:0x64fa200c}
	I0906 12:15:12.287492    2174 main.go:141] libmachine: Attempt 6
	I0906 12:15:12.287522    2174 main.go:141] libmachine: Searching for 26:1c:6b:d2:0:15 in /var/db/dhcpd_leases ...
	I0906 12:15:12.287672    2174 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0906 12:15:12.287686    2174 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:26:1c:6b:d2:0:15 ID:1,26:1c:6b:d2:0:15 Lease:0x64fa213f}
	I0906 12:15:12.287691    2174 main.go:141] libmachine: Found match: 26:1c:6b:d2:0:15
	I0906 12:15:12.287703    2174 main.go:141] libmachine: IP: 192.168.105.5
	I0906 12:15:12.287710    2174 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.5)...
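Note that the QEMU command above was given MAC `26:1c:6b:d2:00:15`, while the lease search and the eventual match use `26:1c:6b:d2:0:15`: macOS's `/var/db/dhcpd_leases` stores each MAC octet without leading zeros, so the driver has to normalize the MAC before comparing lease entries. A minimal sketch of that normalization, assuming a hypothetical `trimMACLeadingZeros` helper (not minikube's actual code):

```go
// Sketch of dhcpd_leases MAC normalization (hypothetical helper, not
// minikube's actual code): macOS writes "00" octets as "0", so the MAC
// passed to QEMU must be rewritten before matching lease entries.
package main

import (
	"fmt"
	"strings"
)

// trimMACLeadingZeros rewrites "26:1c:6b:d2:00:15" as "26:1c:6b:d2:0:15",
// matching the representation used in /var/db/dhcpd_leases.
func trimMACLeadingZeros(mac string) string {
	parts := strings.Split(mac, ":")
	for i, p := range parts {
		trimmed := strings.TrimLeft(p, "0")
		if trimmed == "" {
			trimmed = "0" // an all-zero octet stays "0", not ""
		}
		parts[i] = trimmed
	}
	return strings.Join(parts, ":")
}

func main() {
	fmt.Println(trimMACLeadingZeros("26:1c:6b:d2:00:15")) // 26:1c:6b:d2:0:15
}
```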
	I0906 12:15:14.306873    2174 machine.go:88] provisioning docker machine ...
	I0906 12:15:14.306928    2174 buildroot.go:166] provisioning hostname "image-147000"
	I0906 12:15:14.307130    2174 main.go:141] libmachine: Using SSH client type: native
	I0906 12:15:14.307958    2174 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010763b0] 0x101078e10 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0906 12:15:14.307970    2174 main.go:141] libmachine: About to run SSH command:
	sudo hostname image-147000 && echo "image-147000" | sudo tee /etc/hostname
	I0906 12:15:14.413593    2174 main.go:141] libmachine: SSH cmd err, output: <nil>: image-147000
	
	I0906 12:15:14.413729    2174 main.go:141] libmachine: Using SSH client type: native
	I0906 12:15:14.414307    2174 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010763b0] 0x101078e10 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0906 12:15:14.414320    2174 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\simage-147000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 image-147000/g' /etc/hosts;
				else 
					echo '127.0.1.1 image-147000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 12:15:14.499508    2174 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 12:15:14.499521    2174 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17116-1006/.minikube CaCertPath:/Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17116-1006/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17116-1006/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17116-1006/.minikube}
	I0906 12:15:14.499533    2174 buildroot.go:174] setting up certificates
	I0906 12:15:14.499540    2174 provision.go:83] configureAuth start
	I0906 12:15:14.499546    2174 provision.go:138] copyHostCerts
	I0906 12:15:14.499663    2174 exec_runner.go:144] found /Users/jenkins/minikube-integration/17116-1006/.minikube/ca.pem, removing ...
	I0906 12:15:14.499670    2174 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17116-1006/.minikube/ca.pem
	I0906 12:15:14.499859    2174 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17116-1006/.minikube/ca.pem (1078 bytes)
	I0906 12:15:14.500144    2174 exec_runner.go:144] found /Users/jenkins/minikube-integration/17116-1006/.minikube/cert.pem, removing ...
	I0906 12:15:14.500149    2174 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17116-1006/.minikube/cert.pem
	I0906 12:15:14.500220    2174 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17116-1006/.minikube/cert.pem (1123 bytes)
	I0906 12:15:14.500376    2174 exec_runner.go:144] found /Users/jenkins/minikube-integration/17116-1006/.minikube/key.pem, removing ...
	I0906 12:15:14.500378    2174 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17116-1006/.minikube/key.pem
	I0906 12:15:14.500440    2174 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17116-1006/.minikube/key.pem (1679 bytes)
	I0906 12:15:14.500566    2174 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca-key.pem org=jenkins.image-147000 san=[192.168.105.5 192.168.105.5 localhost 127.0.0.1 minikube image-147000]
	I0906 12:15:14.598561    2174 provision.go:172] copyRemoteCerts
	I0906 12:15:14.598604    2174 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 12:15:14.598612    2174 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/image-147000/id_rsa Username:docker}
	I0906 12:15:14.636941    2174 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 12:15:14.643966    2174 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0906 12:15:14.650827    2174 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 12:15:14.658059    2174 provision.go:86] duration metric: configureAuth took 158.512042ms
	I0906 12:15:14.658064    2174 buildroot.go:189] setting minikube options for container-runtime
	I0906 12:15:14.658168    2174 config.go:182] Loaded profile config "image-147000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 12:15:14.658202    2174 main.go:141] libmachine: Using SSH client type: native
	I0906 12:15:14.658416    2174 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010763b0] 0x101078e10 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0906 12:15:14.658419    2174 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 12:15:14.729243    2174 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0906 12:15:14.729247    2174 buildroot.go:70] root file system type: tmpfs
	I0906 12:15:14.729299    2174 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 12:15:14.729350    2174 main.go:141] libmachine: Using SSH client type: native
	I0906 12:15:14.729611    2174 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010763b0] 0x101078e10 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0906 12:15:14.729654    2174 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 12:15:14.805749    2174 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 12:15:14.805803    2174 main.go:141] libmachine: Using SSH client type: native
	I0906 12:15:14.806072    2174 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010763b0] 0x101078e10 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0906 12:15:14.806080    2174 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 12:15:15.141318    2174 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0906 12:15:15.141326    2174 machine.go:91] provisioned docker machine in 834.445833ms
	I0906 12:15:15.141331    2174 client.go:171] LocalClient.Create took 15.281007042s
	I0906 12:15:15.141350    2174 start.go:167] duration metric: libmachine.API.Create for "image-147000" took 15.28105s
	I0906 12:15:15.141352    2174 start.go:300] post-start starting for "image-147000" (driver="qemu2")
	I0906 12:15:15.141356    2174 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 12:15:15.141422    2174 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 12:15:15.141429    2174 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/image-147000/id_rsa Username:docker}
	I0906 12:15:15.179754    2174 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 12:15:15.182685    2174 info.go:137] Remote host: Buildroot 2021.02.12
	I0906 12:15:15.182697    2174 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17116-1006/.minikube/addons for local assets ...
	I0906 12:15:15.182787    2174 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17116-1006/.minikube/files for local assets ...
	I0906 12:15:15.182892    2174 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17116-1006/.minikube/files/etc/ssl/certs/14212.pem -> 14212.pem in /etc/ssl/certs
	I0906 12:15:15.183014    2174 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 12:15:15.186659    2174 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17116-1006/.minikube/files/etc/ssl/certs/14212.pem --> /etc/ssl/certs/14212.pem (1708 bytes)
	I0906 12:15:15.194022    2174 start.go:303] post-start completed in 52.662542ms
	I0906 12:15:15.194470    2174 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/image-147000/config.json ...
	I0906 12:15:15.194620    2174 start.go:128] duration metric: createHost completed in 15.362707917s
	I0906 12:15:15.194651    2174 main.go:141] libmachine: Using SSH client type: native
	I0906 12:15:15.194864    2174 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010763b0] 0x101078e10 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0906 12:15:15.194868    2174 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 12:15:15.264860    2174 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694027715.547195210
	
	I0906 12:15:15.264864    2174 fix.go:206] guest clock: 1694027715.547195210
	I0906 12:15:15.264867    2174 fix.go:219] Guest: 2023-09-06 12:15:15.54719521 -0700 PDT Remote: 2023-09-06 12:15:15.194623 -0700 PDT m=+15.465197335 (delta=352.57221ms)
	I0906 12:15:15.264876    2174 fix.go:190] guest clock delta is within tolerance: 352.57221ms
	I0906 12:15:15.264878    2174 start.go:83] releasing machines lock for "image-147000", held for 15.432999208s
	I0906 12:15:15.265137    2174 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 12:15:15.265138    2174 ssh_runner.go:195] Run: cat /version.json
	I0906 12:15:15.265148    2174 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/image-147000/id_rsa Username:docker}
	I0906 12:15:15.265157    2174 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/image-147000/id_rsa Username:docker}
	I0906 12:15:15.302564    2174 ssh_runner.go:195] Run: systemctl --version
	I0906 12:15:15.344273    2174 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 12:15:15.346442    2174 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 12:15:15.346471    2174 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 12:15:15.352164    2174 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 12:15:15.352168    2174 start.go:466] detecting cgroup driver to use...
	I0906 12:15:15.352241    2174 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:15:15.358398    2174 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0906 12:15:15.361614    2174 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0906 12:15:15.364685    2174 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0906 12:15:15.364709    2174 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0906 12:15:15.367703    2174 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:15:15.370950    2174 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0906 12:15:15.374314    2174 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:15:15.377303    2174 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 12:15:15.380119    2174 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0906 12:15:15.383288    2174 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 12:15:15.386407    2174 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 12:15:15.389073    2174 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:15:15.450160    2174 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0906 12:15:15.457494    2174 start.go:466] detecting cgroup driver to use...
	I0906 12:15:15.457556    2174 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 12:15:15.462964    2174 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:15:15.467812    2174 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 12:15:15.477436    2174 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:15:15.482057    2174 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:15:15.486830    2174 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0906 12:15:15.522871    2174 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:15:15.528005    2174 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:15:15.533101    2174 ssh_runner.go:195] Run: which cri-dockerd
	I0906 12:15:15.534456    2174 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0906 12:15:15.537089    2174 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0906 12:15:15.541767    2174 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 12:15:15.605905    2174 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 12:15:15.670406    2174 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0906 12:15:15.670415    2174 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0906 12:15:15.675943    2174 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:15:15.758869    2174 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 12:15:16.906712    2174 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.147840792s)
	I0906 12:15:16.906770    2174 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0906 12:15:16.969980    2174 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0906 12:15:17.029212    2174 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0906 12:15:17.087760    2174 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:15:17.148142    2174 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0906 12:15:17.155791    2174 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:15:17.221145    2174 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0906 12:15:17.243732    2174 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0906 12:15:17.243817    2174 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0906 12:15:17.246378    2174 start.go:534] Will wait 60s for crictl version
	I0906 12:15:17.246492    2174 ssh_runner.go:195] Run: which crictl
	I0906 12:15:17.248648    2174 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 12:15:17.265122    2174 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.5
	RuntimeApiVersion:  v1alpha2
	I0906 12:15:17.265202    2174 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 12:15:17.276929    2174 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 12:15:17.292343    2174 out.go:204] * Preparing Kubernetes v1.28.1 on Docker 24.0.5 ...
	I0906 12:15:17.292471    2174 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0906 12:15:17.293776    2174 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 12:15:17.297224    2174 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 12:15:17.297264    2174 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 12:15:17.302391    2174 docker.go:636] Got preloaded images: 
	I0906 12:15:17.302395    2174 docker.go:642] registry.k8s.io/kube-apiserver:v1.28.1 wasn't preloaded
	I0906 12:15:17.302430    2174 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0906 12:15:17.305457    2174 ssh_runner.go:195] Run: which lz4
	I0906 12:15:17.306788    2174 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0906 12:15:17.308004    2174 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0906 12:15:17.308015    2174 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (356902558 bytes)
	I0906 12:15:18.621594    2174 docker.go:600] Took 1.314854 seconds to copy over tarball
	I0906 12:15:18.621650    2174 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0906 12:15:19.651006    2174 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.02934725s)
	I0906 12:15:19.651020    2174 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0906 12:15:19.667207    2174 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0906 12:15:19.670696    2174 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0906 12:15:19.675822    2174 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:15:19.737624    2174 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 12:15:21.256577    2174 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.518952417s)
	I0906 12:15:21.256657    2174 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 12:15:21.262788    2174 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.1
	registry.k8s.io/kube-proxy:v1.28.1
	registry.k8s.io/kube-controller-manager:v1.28.1
	registry.k8s.io/kube-scheduler:v1.28.1
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0906 12:15:21.262793    2174 cache_images.go:84] Images are preloaded, skipping loading
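The preload flow above is: list `docker images`, and if the expected `kube-apiserver` image is absent, copy the lz4 preload tarball into the guest, extract it under `/var`, and restart docker so it picks up the restored image store; the second listing then confirms the images are present. A sketch of the decision step, with an assumed `needsPreload` helper (not minikube's API):

```go
// Sketch (assumed helper, not minikube's API) of the preload check: the
// marker image must appear in `docker images --format {{.Repository}}:{{.Tag}}`
// output, otherwise the preload tarball has to be copied and extracted.
package main

import (
	"fmt"
	"strings"
)

func needsPreload(dockerImages, marker string) bool {
	for _, img := range strings.Split(dockerImages, "\n") {
		if strings.TrimSpace(img) == marker {
			return false // marker image present: preload already applied
		}
	}
	return true
}

func main() {
	before := "" // empty image list, as in the first check in the log
	after := "registry.k8s.io/kube-apiserver:v1.28.1\nregistry.k8s.io/kube-proxy:v1.28.1"
	fmt.Println(needsPreload(before, "registry.k8s.io/kube-apiserver:v1.28.1")) // true
	fmt.Println(needsPreload(after, "registry.k8s.io/kube-apiserver:v1.28.1"))  // false
}
```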
	I0906 12:15:21.262849    2174 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0906 12:15:21.270663    2174 cni.go:84] Creating CNI manager for ""
	I0906 12:15:21.270669    2174 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 12:15:21.270682    2174 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0906 12:15:21.270690    2174 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.5 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:image-147000 NodeName:image-147000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 12:15:21.270754    2174 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "image-147000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 12:15:21.270792    2174 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=image-147000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:image-147000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0906 12:15:21.270841    2174 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0906 12:15:21.274237    2174 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 12:15:21.274267    2174 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 12:15:21.276908    2174 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I0906 12:15:21.281658    2174 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 12:15:21.286479    2174 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2098 bytes)
	I0906 12:15:21.291484    2174 ssh_runner.go:195] Run: grep 192.168.105.5	control-plane.minikube.internal$ /etc/hosts
	I0906 12:15:21.292822    2174 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.5	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 12:15:21.296178    2174 certs.go:56] Setting up /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/image-147000 for IP: 192.168.105.5
	I0906 12:15:21.296186    2174 certs.go:190] acquiring lock for shared ca certs: {Name:mk2fda2e4681223badcda373e6897c8a04d70962 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:15:21.296322    2174 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17116-1006/.minikube/ca.key
	I0906 12:15:21.296365    2174 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17116-1006/.minikube/proxy-client-ca.key
	I0906 12:15:21.296389    2174 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/image-147000/client.key
	I0906 12:15:21.296394    2174 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/image-147000/client.crt with IP's: []
	I0906 12:15:21.409621    2174 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/image-147000/client.crt ...
	I0906 12:15:21.409624    2174 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/image-147000/client.crt: {Name:mk8a10bb8b7e0af73de96109496ce33ed8374c49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:15:21.409847    2174 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/image-147000/client.key ...
	I0906 12:15:21.409849    2174 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/image-147000/client.key: {Name:mkf69b41c14c1bffcb25d4e55fe74357d320fe52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:15:21.409969    2174 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/image-147000/apiserver.key.e69b33ca
	I0906 12:15:21.409974    2174 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/image-147000/apiserver.crt.e69b33ca with IP's: [192.168.105.5 10.96.0.1 127.0.0.1 10.0.0.1]
	I0906 12:15:21.543784    2174 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/image-147000/apiserver.crt.e69b33ca ...
	I0906 12:15:21.543786    2174 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/image-147000/apiserver.crt.e69b33ca: {Name:mkbd8a0d901baffc594e488993bc92b4516fde76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:15:21.543935    2174 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/image-147000/apiserver.key.e69b33ca ...
	I0906 12:15:21.543937    2174 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/image-147000/apiserver.key.e69b33ca: {Name:mk06f1d55f3eb7cb7047766b375ad350a4835d48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:15:21.544065    2174 certs.go:337] copying /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/image-147000/apiserver.crt.e69b33ca -> /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/image-147000/apiserver.crt
	I0906 12:15:21.544252    2174 certs.go:341] copying /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/image-147000/apiserver.key.e69b33ca -> /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/image-147000/apiserver.key
	I0906 12:15:21.544348    2174 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/image-147000/proxy-client.key
	I0906 12:15:21.544354    2174 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/image-147000/proxy-client.crt with IP's: []
	I0906 12:15:21.648809    2174 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/image-147000/proxy-client.crt ...
	I0906 12:15:21.648813    2174 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/image-147000/proxy-client.crt: {Name:mkab1086d17de11d3be9cd1c5030832bf7058f94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:15:21.649031    2174 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/image-147000/proxy-client.key ...
	I0906 12:15:21.649033    2174 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/image-147000/proxy-client.key: {Name:mk5cccaca2925079227dfcff4659833c46808183 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:15:21.649275    2174 certs.go:437] found cert: /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/Users/jenkins/minikube-integration/17116-1006/.minikube/certs/1421.pem (1338 bytes)
	W0906 12:15:21.649300    2174 certs.go:433] ignoring /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/Users/jenkins/minikube-integration/17116-1006/.minikube/certs/1421_empty.pem, impossibly tiny 0 bytes
	I0906 12:15:21.649305    2174 certs.go:437] found cert: /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 12:15:21.649322    2174 certs.go:437] found cert: /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem (1078 bytes)
	I0906 12:15:21.649338    2174 certs.go:437] found cert: /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/Users/jenkins/minikube-integration/17116-1006/.minikube/certs/cert.pem (1123 bytes)
	I0906 12:15:21.649353    2174 certs.go:437] found cert: /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/Users/jenkins/minikube-integration/17116-1006/.minikube/certs/key.pem (1679 bytes)
	I0906 12:15:21.649391    2174 certs.go:437] found cert: /Users/jenkins/minikube-integration/17116-1006/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17116-1006/.minikube/files/etc/ssl/certs/14212.pem (1708 bytes)
	I0906 12:15:21.649662    2174 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/image-147000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0906 12:15:21.656895    2174 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/image-147000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0906 12:15:21.663662    2174 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/image-147000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 12:15:21.670827    2174 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/image-147000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 12:15:21.677804    2174 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17116-1006/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 12:15:21.684481    2174 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17116-1006/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0906 12:15:21.691421    2174 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17116-1006/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 12:15:21.698467    2174 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17116-1006/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0906 12:15:21.705220    2174 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17116-1006/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 12:15:21.711667    2174 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/1421.pem --> /usr/share/ca-certificates/1421.pem (1338 bytes)
	I0906 12:15:21.718684    2174 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17116-1006/.minikube/files/etc/ssl/certs/14212.pem --> /usr/share/ca-certificates/14212.pem (1708 bytes)
	I0906 12:15:21.726060    2174 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 12:15:21.731137    2174 ssh_runner.go:195] Run: openssl version
	I0906 12:15:21.733162    2174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14212.pem && ln -fs /usr/share/ca-certificates/14212.pem /etc/ssl/certs/14212.pem"
	I0906 12:15:21.736088    2174 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14212.pem
	I0906 12:15:21.737578    2174 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep  6 19:11 /usr/share/ca-certificates/14212.pem
	I0906 12:15:21.737594    2174 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14212.pem
	I0906 12:15:21.739410    2174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14212.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 12:15:21.743008    2174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 12:15:21.746331    2174 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:15:21.747874    2174 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep  6 19:10 /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:15:21.747893    2174 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:15:21.749814    2174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 12:15:21.752589    2174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1421.pem && ln -fs /usr/share/ca-certificates/1421.pem /etc/ssl/certs/1421.pem"
	I0906 12:15:21.755620    2174 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1421.pem
	I0906 12:15:21.757315    2174 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep  6 19:11 /usr/share/ca-certificates/1421.pem
	I0906 12:15:21.757334    2174 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1421.pem
	I0906 12:15:21.759201    2174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1421.pem /etc/ssl/certs/51391683.0"
	I0906 12:15:21.762865    2174 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0906 12:15:21.764404    2174 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0906 12:15:21.764431    2174 kubeadm.go:404] StartCluster: {Name:image-147000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:image-147000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 12:15:21.764490    2174 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 12:15:21.770025    2174 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 12:15:21.772740    2174 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 12:15:21.775410    2174 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 12:15:21.778421    2174 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 12:15:21.778433    2174 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0906 12:15:21.804734    2174 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0906 12:15:21.804758    2174 kubeadm.go:322] [preflight] Running pre-flight checks
	I0906 12:15:21.860919    2174 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 12:15:21.860973    2174 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 12:15:21.861011    2174 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0906 12:15:21.917611    2174 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 12:15:21.926811    2174 out.go:204]   - Generating certificates and keys ...
	I0906 12:15:21.926857    2174 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0906 12:15:21.926883    2174 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0906 12:15:22.043453    2174 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0906 12:15:22.417891    2174 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0906 12:15:22.609963    2174 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0906 12:15:22.800956    2174 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0906 12:15:22.977054    2174 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0906 12:15:22.977106    2174 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [image-147000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I0906 12:15:23.063271    2174 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0906 12:15:23.063324    2174 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [image-147000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I0906 12:15:23.108121    2174 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0906 12:15:23.202803    2174 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0906 12:15:23.254643    2174 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0906 12:15:23.254669    2174 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 12:15:23.341418    2174 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 12:15:23.511052    2174 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 12:15:23.850583    2174 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 12:15:24.032742    2174 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 12:15:24.032924    2174 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 12:15:24.034185    2174 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 12:15:24.042574    2174 out.go:204]   - Booting up control plane ...
	I0906 12:15:24.042640    2174 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 12:15:24.042677    2174 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 12:15:24.042739    2174 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 12:15:24.042793    2174 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 12:15:24.043182    2174 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 12:15:24.043245    2174 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0906 12:15:24.113565    2174 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0906 12:15:28.114933    2174 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.002321 seconds
	I0906 12:15:28.114987    2174 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0906 12:15:28.120103    2174 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0906 12:15:28.629376    2174 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0906 12:15:28.629477    2174 kubeadm.go:322] [mark-control-plane] Marking the node image-147000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0906 12:15:29.135163    2174 kubeadm.go:322] [bootstrap-token] Using token: zovg05.ls7wvvd1s4h3qgcd
	I0906 12:15:29.141371    2174 out.go:204]   - Configuring RBAC rules ...
	I0906 12:15:29.141430    2174 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0906 12:15:29.142675    2174 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0906 12:15:29.146647    2174 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0906 12:15:29.147836    2174 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0906 12:15:29.148867    2174 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0906 12:15:29.150204    2174 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0906 12:15:29.154295    2174 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0906 12:15:29.320676    2174 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0906 12:15:29.545143    2174 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0906 12:15:29.545547    2174 kubeadm.go:322] 
	I0906 12:15:29.545578    2174 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0906 12:15:29.545580    2174 kubeadm.go:322] 
	I0906 12:15:29.545613    2174 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0906 12:15:29.545615    2174 kubeadm.go:322] 
	I0906 12:15:29.545626    2174 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0906 12:15:29.545657    2174 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0906 12:15:29.545682    2174 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0906 12:15:29.545685    2174 kubeadm.go:322] 
	I0906 12:15:29.545714    2174 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0906 12:15:29.545716    2174 kubeadm.go:322] 
	I0906 12:15:29.545742    2174 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0906 12:15:29.545744    2174 kubeadm.go:322] 
	I0906 12:15:29.545769    2174 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0906 12:15:29.545810    2174 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0906 12:15:29.545840    2174 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0906 12:15:29.545842    2174 kubeadm.go:322] 
	I0906 12:15:29.545886    2174 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0906 12:15:29.545929    2174 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0906 12:15:29.545931    2174 kubeadm.go:322] 
	I0906 12:15:29.545976    2174 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token zovg05.ls7wvvd1s4h3qgcd \
	I0906 12:15:29.546027    2174 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:17b7f6de3b10bbc20f0186efe5750d1dace064ea3ce551ed11c6083fb754ab3d \
	I0906 12:15:29.546042    2174 kubeadm.go:322] 	--control-plane 
	I0906 12:15:29.546043    2174 kubeadm.go:322] 
	I0906 12:15:29.546081    2174 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0906 12:15:29.546082    2174 kubeadm.go:322] 
	I0906 12:15:29.546132    2174 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token zovg05.ls7wvvd1s4h3qgcd \
	I0906 12:15:29.546185    2174 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:17b7f6de3b10bbc20f0186efe5750d1dace064ea3ce551ed11c6083fb754ab3d 
	I0906 12:15:29.546257    2174 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 12:15:29.546263    2174 cni.go:84] Creating CNI manager for ""
	I0906 12:15:29.546270    2174 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 12:15:29.553399    2174 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0906 12:15:29.557468    2174 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0906 12:15:29.560585    2174 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0906 12:15:29.565136    2174 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 12:15:29.565183    2174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:15:29.565201    2174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=5d7e0b4e9bf2e12b4952bdb0ed6ce3c8b866f138 minikube.k8s.io/name=image-147000 minikube.k8s.io/updated_at=2023_09_06T12_15_29_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:15:29.631539    2174 kubeadm.go:1081] duration metric: took 66.386709ms to wait for elevateKubeSystemPrivileges.
	I0906 12:15:29.631546    2174 ops.go:34] apiserver oom_adj: -16
	I0906 12:15:29.631550    2174 kubeadm.go:406] StartCluster complete in 7.867178292s
	I0906 12:15:29.631559    2174 settings.go:142] acquiring lock: {Name:mkdab5683cd98d968361f82dee37aa31492af7cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:15:29.631641    2174 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17116-1006/kubeconfig
	I0906 12:15:29.631943    2174 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17116-1006/kubeconfig: {Name:mk69a76938a18011410dd32eccb7fee080824c64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:15:29.632143    2174 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0906 12:15:29.632186    2174 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0906 12:15:29.632222    2174 addons.go:69] Setting storage-provisioner=true in profile "image-147000"
	I0906 12:15:29.632228    2174 addons.go:231] Setting addon storage-provisioner=true in "image-147000"
	I0906 12:15:29.632234    2174 addons.go:69] Setting default-storageclass=true in profile "image-147000"
	I0906 12:15:29.632241    2174 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "image-147000"
	I0906 12:15:29.632253    2174 host.go:66] Checking if "image-147000" exists ...
	I0906 12:15:29.632253    2174 config.go:182] Loaded profile config "image-147000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 12:15:29.637396    2174 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 12:15:29.641409    2174 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 12:15:29.641413    2174 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0906 12:15:29.641420    2174 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/image-147000/id_rsa Username:docker}
	I0906 12:15:29.647448    2174 addons.go:231] Setting addon default-storageclass=true in "image-147000"
	I0906 12:15:29.647466    2174 host.go:66] Checking if "image-147000" exists ...
	I0906 12:15:29.648142    2174 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 12:15:29.648146    2174 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 12:15:29.648151    2174 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/image-147000/id_rsa Username:docker}
	I0906 12:15:29.651430    2174 kapi.go:248] "coredns" deployment in "kube-system" namespace and "image-147000" context rescaled to 1 replicas
	I0906 12:15:29.651445    2174 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:15:29.659296    2174 out.go:177] * Verifying Kubernetes components...
	I0906 12:15:29.663460    2174 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 12:15:29.679660    2174 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0906 12:15:29.679915    2174 api_server.go:52] waiting for apiserver process to appear ...
	I0906 12:15:29.679948    2174 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 12:15:29.709365    2174 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 12:15:29.740092    2174 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 12:15:30.133676    2174 start.go:907] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0906 12:15:30.133695    2174 api_server.go:72] duration metric: took 482.244791ms to wait for apiserver process to appear ...
	I0906 12:15:30.133699    2174 api_server.go:88] waiting for apiserver healthz status ...
	I0906 12:15:30.133706    2174 api_server.go:253] Checking apiserver healthz at https://192.168.105.5:8443/healthz ...
	I0906 12:15:30.137135    2174 api_server.go:279] https://192.168.105.5:8443/healthz returned 200:
	ok
	I0906 12:15:30.137740    2174 api_server.go:141] control plane version: v1.28.1
	I0906 12:15:30.137744    2174 api_server.go:131] duration metric: took 4.043583ms to wait for apiserver health ...
	I0906 12:15:30.137750    2174 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 12:15:30.140378    2174 system_pods.go:59] 4 kube-system pods found
	I0906 12:15:30.140387    2174 system_pods.go:61] "etcd-image-147000" [6d662a22-5768-45f6-8f83-2822bdd5c5d1] Pending
	I0906 12:15:30.140390    2174 system_pods.go:61] "kube-apiserver-image-147000" [7db3c0c0-eb30-49cd-89a7-152c465e1216] Pending
	I0906 12:15:30.140392    2174 system_pods.go:61] "kube-controller-manager-image-147000" [b98d6147-39ab-4cdc-89c9-02a47cfc4339] Pending
	I0906 12:15:30.140394    2174 system_pods.go:61] "kube-scheduler-image-147000" [f591a2b5-fd6a-4b98-9e36-588cfcc084d8] Pending
	I0906 12:15:30.140396    2174 system_pods.go:74] duration metric: took 2.644292ms to wait for pod list to return data ...
	I0906 12:15:30.140400    2174 kubeadm.go:581] duration metric: took 488.949875ms to wait for : map[apiserver:true system_pods:true] ...
	I0906 12:15:30.140406    2174 node_conditions.go:102] verifying NodePressure condition ...
	I0906 12:15:30.141869    2174 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0906 12:15:30.141876    2174 node_conditions.go:123] node cpu capacity is 2
	I0906 12:15:30.141881    2174 node_conditions.go:105] duration metric: took 1.472875ms to run NodePressure ...
	I0906 12:15:30.141886    2174 start.go:228] waiting for startup goroutines ...
	I0906 12:15:30.197524    2174 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0906 12:15:30.205412    2174 addons.go:502] enable addons completed in 573.2235ms: enabled=[default-storageclass storage-provisioner]
	I0906 12:15:30.205429    2174 start.go:233] waiting for cluster config update ...
	I0906 12:15:30.205435    2174 start.go:242] writing updated cluster config ...
	I0906 12:15:30.205736    2174 ssh_runner.go:195] Run: rm -f paused
	I0906 12:15:30.232938    2174 start.go:600] kubectl: 1.27.2, cluster: 1.28.1 (minor skew: 1)
	I0906 12:15:30.236434    2174 out.go:177] * Done! kubectl is now configured to use "image-147000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Wed 2023-09-06 19:15:11 UTC, ends at Wed 2023-09-06 19:15:32 UTC. --
	Sep 06 19:15:25 image-147000 cri-dockerd[1056]: time="2023-09-06T19:15:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a2fc13eb28c516e76c14871d31e25f002ad496de54e708d71d9c322ee6ee5363/resolv.conf as [nameserver 192.168.105.1]"
	Sep 06 19:15:25 image-147000 dockerd[1165]: time="2023-09-06T19:15:25.334256298Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 06 19:15:25 image-147000 dockerd[1165]: time="2023-09-06T19:15:25.334385173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:15:25 image-147000 dockerd[1165]: time="2023-09-06T19:15:25.334410798Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 06 19:15:25 image-147000 dockerd[1165]: time="2023-09-06T19:15:25.334448507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:15:25 image-147000 dockerd[1165]: time="2023-09-06T19:15:25.348739090Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 06 19:15:25 image-147000 dockerd[1165]: time="2023-09-06T19:15:25.348881257Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:15:25 image-147000 dockerd[1165]: time="2023-09-06T19:15:25.348927423Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 06 19:15:25 image-147000 dockerd[1165]: time="2023-09-06T19:15:25.348985632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:15:25 image-147000 cri-dockerd[1056]: time="2023-09-06T19:15:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4ca11ff3076130789636c8bf03b2cf8552f76570177e89d3386c3fe857c5dfdb/resolv.conf as [nameserver 192.168.105.1]"
	Sep 06 19:15:25 image-147000 dockerd[1165]: time="2023-09-06T19:15:25.416781632Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 06 19:15:25 image-147000 dockerd[1165]: time="2023-09-06T19:15:25.416903632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:15:25 image-147000 dockerd[1165]: time="2023-09-06T19:15:25.416917382Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 06 19:15:25 image-147000 dockerd[1165]: time="2023-09-06T19:15:25.416926507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:15:31 image-147000 dockerd[1158]: time="2023-09-06T19:15:31.541367301Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Sep 06 19:15:31 image-147000 dockerd[1158]: time="2023-09-06T19:15:31.667045551Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Sep 06 19:15:31 image-147000 dockerd[1158]: time="2023-09-06T19:15:31.681834885Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Sep 06 19:15:31 image-147000 dockerd[1165]: time="2023-09-06T19:15:31.716393885Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 06 19:15:31 image-147000 dockerd[1165]: time="2023-09-06T19:15:31.716423718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:15:31 image-147000 dockerd[1165]: time="2023-09-06T19:15:31.716429760Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 06 19:15:31 image-147000 dockerd[1165]: time="2023-09-06T19:15:31.716605676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:15:31 image-147000 dockerd[1158]: time="2023-09-06T19:15:31.863826801Z" level=info msg="ignoring event" container=86c37d6d92793af524c682ea73596ebb3869203f51659bad64c94bb2840adb5b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 19:15:31 image-147000 dockerd[1165]: time="2023-09-06T19:15:31.864034801Z" level=info msg="shim disconnected" id=86c37d6d92793af524c682ea73596ebb3869203f51659bad64c94bb2840adb5b namespace=moby
	Sep 06 19:15:31 image-147000 dockerd[1165]: time="2023-09-06T19:15:31.864081635Z" level=warning msg="cleaning up after shim disconnected" id=86c37d6d92793af524c682ea73596ebb3869203f51659bad64c94bb2840adb5b namespace=moby
	Sep 06 19:15:31 image-147000 dockerd[1165]: time="2023-09-06T19:15:31.864086010Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	862d77c711132       b4a5a57e99492       7 seconds ago       Running             kube-scheduler            0                   4ca11ff307613
	facb64e2b42e9       9cdd6470f48c8       7 seconds ago       Running             etcd                      0                   a2fc13eb28c51
	25310df0d702d       b29fb62480892       7 seconds ago       Running             kube-apiserver            0                   d10b4a4e38f54
	ca00f7ba61e81       8b6e1980b7584       7 seconds ago       Running             kube-controller-manager   0                   49a1a09d2f6f4
	
	* 
	* ==> describe nodes <==
	* Name:               image-147000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=image-147000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5d7e0b4e9bf2e12b4952bdb0ed6ce3c8b866f138
	                    minikube.k8s.io/name=image-147000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_06T12_15_29_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 06 Sep 2023 19:15:27 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  image-147000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 06 Sep 2023 19:15:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 06 Sep 2023 19:15:29 +0000   Wed, 06 Sep 2023 19:15:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 06 Sep 2023 19:15:29 +0000   Wed, 06 Sep 2023 19:15:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 06 Sep 2023 19:15:29 +0000   Wed, 06 Sep 2023 19:15:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 06 Sep 2023 19:15:29 +0000   Wed, 06 Sep 2023 19:15:26 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
	Addresses:
	  InternalIP:  192.168.105.5
	  Hostname:    image-147000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 741c2940f75a41d99f380fb2a2240b5e
	  System UUID:                741c2940f75a41d99f380fb2a2240b5e
	  Boot ID:                    cb07a3b6-cce2-445c-9dab-b227f19c1fd3
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.5
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-image-147000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         3s
	  kube-system                 kube-apiserver-image-147000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-controller-manager-image-147000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-image-147000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (2%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age              From     Message
	  ----    ------                   ----             ----     -------
	  Normal  Starting                 8s               kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s (x8 over 8s)  kubelet  Node image-147000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x8 over 8s)  kubelet  Node image-147000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x7 over 8s)  kubelet  Node image-147000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8s               kubelet  Updated Node Allocatable limit across pods
	  Normal  Starting                 3s               kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  3s               kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3s               kubelet  Node image-147000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3s               kubelet  Node image-147000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3s               kubelet  Node image-147000 status is now: NodeHasSufficientPID
	
	* 
	* ==> dmesg <==
	* [Sep 6 19:15] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.669045] EINJ: EINJ table not found.
	[  +0.521919] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.043531] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000896] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +4.162083] systemd-fstab-generator[476]: Ignoring "noauto" for root device
	[  +0.062244] systemd-fstab-generator[488]: Ignoring "noauto" for root device
	[  +0.438913] systemd-fstab-generator[756]: Ignoring "noauto" for root device
	[  +0.157184] systemd-fstab-generator[793]: Ignoring "noauto" for root device
	[  +0.062907] systemd-fstab-generator[804]: Ignoring "noauto" for root device
	[  +0.090747] systemd-fstab-generator[817]: Ignoring "noauto" for root device
	[  +1.210690] systemd-fstab-generator[975]: Ignoring "noauto" for root device
	[  +0.060927] systemd-fstab-generator[986]: Ignoring "noauto" for root device
	[  +0.057281] systemd-fstab-generator[997]: Ignoring "noauto" for root device
	[  +0.061806] systemd-fstab-generator[1008]: Ignoring "noauto" for root device
	[  +0.069824] systemd-fstab-generator[1049]: Ignoring "noauto" for root device
	[  +2.519136] systemd-fstab-generator[1151]: Ignoring "noauto" for root device
	[  +1.496041] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.874920] systemd-fstab-generator[1484]: Ignoring "noauto" for root device
	[  +5.106205] systemd-fstab-generator[2372]: Ignoring "noauto" for root device
	[  +2.240717] kauditd_printk_skb: 41 callbacks suppressed
	
	* 
	* ==> etcd [facb64e2b42e] <==
	* {"level":"info","ts":"2023-09-06T19:15:25.433866Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-06T19:15:25.43388Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-06T19:15:25.433883Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-06T19:15:25.434074Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.105.5:2380"}
	{"level":"info","ts":"2023-09-06T19:15:25.434078Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.5:2380"}
	{"level":"info","ts":"2023-09-06T19:15:25.434536Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 switched to configuration voters=(6403572207504089856)"}
	{"level":"info","ts":"2023-09-06T19:15:25.434583Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cd5c0afff2184bea","local-member-id":"58de0efec1d86300","added-peer-id":"58de0efec1d86300","added-peer-peer-urls":["https://192.168.105.5:2380"]}
	{"level":"info","ts":"2023-09-06T19:15:26.422492Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 is starting a new election at term 1"}
	{"level":"info","ts":"2023-09-06T19:15:26.422632Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-09-06T19:15:26.422679Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 received MsgPreVoteResp from 58de0efec1d86300 at term 1"}
	{"level":"info","ts":"2023-09-06T19:15:26.422717Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became candidate at term 2"}
	{"level":"info","ts":"2023-09-06T19:15:26.422733Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 received MsgVoteResp from 58de0efec1d86300 at term 2"}
	{"level":"info","ts":"2023-09-06T19:15:26.422768Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became leader at term 2"}
	{"level":"info","ts":"2023-09-06T19:15:26.422787Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 58de0efec1d86300 elected leader 58de0efec1d86300 at term 2"}
	{"level":"info","ts":"2023-09-06T19:15:26.425144Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"58de0efec1d86300","local-member-attributes":"{Name:image-147000 ClientURLs:[https://192.168.105.5:2379]}","request-path":"/0/members/58de0efec1d86300/attributes","cluster-id":"cd5c0afff2184bea","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-06T19:15:26.425147Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-06T19:15:26.4254Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-06T19:15:26.426303Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cd5c0afff2184bea","local-member-id":"58de0efec1d86300","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-06T19:15:26.426613Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-06T19:15:26.426809Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-06T19:15:26.426984Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-06T19:15:26.427928Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-06T19:15:26.429298Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.5:2379"}
	{"level":"info","ts":"2023-09-06T19:15:26.429566Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-06T19:15:26.42969Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  19:15:32 up 0 min,  0 users,  load average: 0.07, 0.02, 0.00
	Linux image-147000 5.10.57 #1 SMP PREEMPT Thu Aug 24 12:01:08 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [25310df0d702] <==
	* I0906 19:15:27.111107       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0906 19:15:27.111132       1 shared_informer.go:318] Caches are synced for configmaps
	I0906 19:15:27.111215       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0906 19:15:27.111218       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0906 19:15:27.111238       1 aggregator.go:166] initial CRD sync complete...
	I0906 19:15:27.111242       1 autoregister_controller.go:141] Starting autoregister controller
	I0906 19:15:27.111245       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0906 19:15:27.111250       1 cache.go:39] Caches are synced for autoregister controller
	I0906 19:15:27.111637       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0906 19:15:27.111764       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0906 19:15:27.112259       1 controller.go:624] quota admission added evaluator for: namespaces
	I0906 19:15:27.125580       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0906 19:15:28.027970       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0906 19:15:28.030367       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0906 19:15:28.030376       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0906 19:15:28.157130       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0906 19:15:28.172425       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0906 19:15:28.221293       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0906 19:15:28.223436       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.105.5]
	I0906 19:15:28.223734       1 controller.go:624] quota admission added evaluator for: endpoints
	I0906 19:15:28.224877       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0906 19:15:29.050097       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0906 19:15:29.598508       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0906 19:15:29.602358       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0906 19:15:29.606056       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	
	* 
	* ==> kube-controller-manager [ca00f7ba61e8] <==
	* I0906 19:15:25.670331       1 serving.go:348] Generated self-signed cert in-memory
	I0906 19:15:25.877759       1 controllermanager.go:189] "Starting" version="v1.28.1"
	I0906 19:15:25.877796       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 19:15:25.878486       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0906 19:15:25.878590       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0906 19:15:25.879194       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0906 19:15:25.879265       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0906 19:15:29.047130       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I0906 19:15:29.051605       1 controllermanager.go:642] "Started controller" controller="persistentvolume-binder-controller"
	I0906 19:15:29.051699       1 pv_controller_base.go:319] "Starting persistent volume controller"
	I0906 19:15:29.051704       1 shared_informer.go:311] Waiting for caches to sync for persistent volume
	I0906 19:15:29.053904       1 controllermanager.go:642] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0906 19:15:29.053945       1 pvc_protection_controller.go:102] "Starting PVC protection controller"
	I0906 19:15:29.053949       1 shared_informer.go:311] Waiting for caches to sync for PVC protection
	I0906 19:15:29.056078       1 controllermanager.go:642] "Started controller" controller="ephemeral-volume-controller"
	I0906 19:15:29.056131       1 controller.go:169] "Starting ephemeral volume controller"
	I0906 19:15:29.056137       1 shared_informer.go:311] Waiting for caches to sync for ephemeral
	I0906 19:15:29.058473       1 controllermanager.go:642] "Started controller" controller="pod-garbage-collector-controller"
	I0906 19:15:29.058530       1 gc_controller.go:103] "Starting GC controller"
	I0906 19:15:29.058541       1 shared_informer.go:311] Waiting for caches to sync for GC
	I0906 19:15:29.149223       1 shared_informer.go:318] Caches are synced for tokens
	
	* 
	* ==> kube-scheduler [862d77c71113] <==
	* W0906 19:15:27.085132       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0906 19:15:27.085164       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0906 19:15:27.085210       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0906 19:15:27.085243       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0906 19:15:27.085289       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0906 19:15:27.085309       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0906 19:15:27.085357       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0906 19:15:27.085372       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0906 19:15:27.085393       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0906 19:15:27.085405       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0906 19:15:27.085439       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0906 19:15:27.085451       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0906 19:15:27.085468       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0906 19:15:27.085480       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0906 19:15:27.085507       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0906 19:15:27.085517       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0906 19:15:27.085533       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0906 19:15:27.085545       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0906 19:15:27.085561       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0906 19:15:27.085590       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0906 19:15:27.085610       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0906 19:15:27.085623       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0906 19:15:27.945987       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0906 19:15:27.946006       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0906 19:15:28.570209       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-09-06 19:15:11 UTC, ends at Wed 2023-09-06 19:15:32 UTC. --
	Sep 06 19:15:29 image-147000 kubelet[2378]: I0906 19:15:29.666550    2378 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	Sep 06 19:15:29 image-147000 kubelet[2378]: I0906 19:15:29.743298    2378 kubelet_node_status.go:70] "Attempting to register node" node="image-147000"
	Sep 06 19:15:29 image-147000 kubelet[2378]: I0906 19:15:29.747376    2378 kubelet_node_status.go:108] "Node was previously registered" node="image-147000"
	Sep 06 19:15:29 image-147000 kubelet[2378]: I0906 19:15:29.747408    2378 kubelet_node_status.go:73] "Successfully registered node" node="image-147000"
	Sep 06 19:15:29 image-147000 kubelet[2378]: I0906 19:15:29.756741    2378 topology_manager.go:215] "Topology Admit Handler" podUID="393ba20567792b98961d26099d1af933" podNamespace="kube-system" podName="etcd-image-147000"
	Sep 06 19:15:29 image-147000 kubelet[2378]: I0906 19:15:29.756815    2378 topology_manager.go:215] "Topology Admit Handler" podUID="35180ce0c77b91850e68f1d4913afbe0" podNamespace="kube-system" podName="kube-apiserver-image-147000"
	Sep 06 19:15:29 image-147000 kubelet[2378]: I0906 19:15:29.756834    2378 topology_manager.go:215] "Topology Admit Handler" podUID="5780a6fd5f90f9fd78ee9f553bec1c7c" podNamespace="kube-system" podName="kube-controller-manager-image-147000"
	Sep 06 19:15:29 image-147000 kubelet[2378]: I0906 19:15:29.756849    2378 topology_manager.go:215] "Topology Admit Handler" podUID="c17e1036f53c62721610382901467139" podNamespace="kube-system" podName="kube-scheduler-image-147000"
	Sep 06 19:15:29 image-147000 kubelet[2378]: I0906 19:15:29.841828    2378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/393ba20567792b98961d26099d1af933-etcd-data\") pod \"etcd-image-147000\" (UID: \"393ba20567792b98961d26099d1af933\") " pod="kube-system/etcd-image-147000"
	Sep 06 19:15:29 image-147000 kubelet[2378]: I0906 19:15:29.841880    2378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/35180ce0c77b91850e68f1d4913afbe0-ca-certs\") pod \"kube-apiserver-image-147000\" (UID: \"35180ce0c77b91850e68f1d4913afbe0\") " pod="kube-system/kube-apiserver-image-147000"
	Sep 06 19:15:29 image-147000 kubelet[2378]: I0906 19:15:29.841892    2378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/35180ce0c77b91850e68f1d4913afbe0-k8s-certs\") pod \"kube-apiserver-image-147000\" (UID: \"35180ce0c77b91850e68f1d4913afbe0\") " pod="kube-system/kube-apiserver-image-147000"
	Sep 06 19:15:29 image-147000 kubelet[2378]: I0906 19:15:29.841902    2378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/35180ce0c77b91850e68f1d4913afbe0-usr-share-ca-certificates\") pod \"kube-apiserver-image-147000\" (UID: \"35180ce0c77b91850e68f1d4913afbe0\") " pod="kube-system/kube-apiserver-image-147000"
	Sep 06 19:15:29 image-147000 kubelet[2378]: I0906 19:15:29.841912    2378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5780a6fd5f90f9fd78ee9f553bec1c7c-kubeconfig\") pod \"kube-controller-manager-image-147000\" (UID: \"5780a6fd5f90f9fd78ee9f553bec1c7c\") " pod="kube-system/kube-controller-manager-image-147000"
	Sep 06 19:15:29 image-147000 kubelet[2378]: I0906 19:15:29.841920    2378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c17e1036f53c62721610382901467139-kubeconfig\") pod \"kube-scheduler-image-147000\" (UID: \"c17e1036f53c62721610382901467139\") " pod="kube-system/kube-scheduler-image-147000"
	Sep 06 19:15:29 image-147000 kubelet[2378]: I0906 19:15:29.841948    2378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/393ba20567792b98961d26099d1af933-etcd-certs\") pod \"etcd-image-147000\" (UID: \"393ba20567792b98961d26099d1af933\") " pod="kube-system/etcd-image-147000"
	Sep 06 19:15:29 image-147000 kubelet[2378]: I0906 19:15:29.841958    2378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5780a6fd5f90f9fd78ee9f553bec1c7c-ca-certs\") pod \"kube-controller-manager-image-147000\" (UID: \"5780a6fd5f90f9fd78ee9f553bec1c7c\") " pod="kube-system/kube-controller-manager-image-147000"
	Sep 06 19:15:29 image-147000 kubelet[2378]: I0906 19:15:29.841967    2378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5780a6fd5f90f9fd78ee9f553bec1c7c-flexvolume-dir\") pod \"kube-controller-manager-image-147000\" (UID: \"5780a6fd5f90f9fd78ee9f553bec1c7c\") " pod="kube-system/kube-controller-manager-image-147000"
	Sep 06 19:15:29 image-147000 kubelet[2378]: I0906 19:15:29.841976    2378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5780a6fd5f90f9fd78ee9f553bec1c7c-k8s-certs\") pod \"kube-controller-manager-image-147000\" (UID: \"5780a6fd5f90f9fd78ee9f553bec1c7c\") " pod="kube-system/kube-controller-manager-image-147000"
	Sep 06 19:15:29 image-147000 kubelet[2378]: I0906 19:15:29.841987    2378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5780a6fd5f90f9fd78ee9f553bec1c7c-usr-share-ca-certificates\") pod \"kube-controller-manager-image-147000\" (UID: \"5780a6fd5f90f9fd78ee9f553bec1c7c\") " pod="kube-system/kube-controller-manager-image-147000"
	Sep 06 19:15:30 image-147000 kubelet[2378]: I0906 19:15:30.623474    2378 apiserver.go:52] "Watching apiserver"
	Sep 06 19:15:30 image-147000 kubelet[2378]: I0906 19:15:30.640362    2378 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Sep 06 19:15:30 image-147000 kubelet[2378]: I0906 19:15:30.692458    2378 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-image-147000" podStartSLOduration=1.692402259 podCreationTimestamp="2023-09-06 19:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-06 19:15:30.691802467 +0000 UTC m=+1.108648251" watchObservedRunningTime="2023-09-06 19:15:30.692402259 +0000 UTC m=+1.109248001"
	Sep 06 19:15:30 image-147000 kubelet[2378]: I0906 19:15:30.696113    2378 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-image-147000" podStartSLOduration=1.696094967 podCreationTimestamp="2023-09-06 19:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-06 19:15:30.695495592 +0000 UTC m=+1.112341376" watchObservedRunningTime="2023-09-06 19:15:30.696094967 +0000 UTC m=+1.112940709"
	Sep 06 19:15:30 image-147000 kubelet[2378]: I0906 19:15:30.699455    2378 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-image-147000" podStartSLOduration=1.6994385090000002 podCreationTimestamp="2023-09-06 19:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-06 19:15:30.699319676 +0000 UTC m=+1.116165459" watchObservedRunningTime="2023-09-06 19:15:30.699438509 +0000 UTC m=+1.116284293"
	Sep 06 19:15:30 image-147000 kubelet[2378]: I0906 19:15:30.710612    2378 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-image-147000" podStartSLOduration=1.710579176 podCreationTimestamp="2023-09-06 19:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-06 19:15:30.704671676 +0000 UTC m=+1.121517459" watchObservedRunningTime="2023-09-06 19:15:30.710579176 +0000 UTC m=+1.127424959"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p image-147000 -n image-147000
helpers_test.go:261: (dbg) Run:  kubectl --context image-147000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestImageBuild/serial/BuildWithBuildArg]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context image-147000 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context image-147000 describe pod storage-provisioner: exit status 1 (39.583916ms)

** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:279: kubectl --context image-147000 describe pod storage-provisioner: exit status 1
--- FAIL: TestImageBuild/serial/BuildWithBuildArg (1.09s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (59.23s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-192000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-192000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (15.14540425s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-192000 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-192000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [79aa3d98-6959-4a70-837b-301e2c6755c0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [79aa3d98-6959-4a70-837b-301e2c6755c0] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.009848708s
addons_test.go:238: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-192000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-192000 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-192000 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.105.6
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.105.6: exit status 1 (15.032005667s)

-- stdout --
	;; connection timed out; no servers could be reached
	

-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.105.6" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached


stderr: 
addons_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-192000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-192000 addons disable ingress-dns --alsologtostderr -v=1: (10.727282959s)
addons_test.go:287: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-192000 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-192000 addons disable ingress --alsologtostderr -v=1: (7.114475083s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ingress-addon-legacy-192000 -n ingress-addon-legacy-192000
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-192000 logs -n 25
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                   Args                   |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-779000 ssh findmnt            | functional-779000           | jenkins | v1.31.2 | 06 Sep 23 12:14 PDT |                     |
	|                | -T /mount2                               |                             |         |         |                     |                     |
	| update-context | functional-779000                        | functional-779000           | jenkins | v1.31.2 | 06 Sep 23 12:14 PDT | 06 Sep 23 12:14 PDT |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| update-context | functional-779000                        | functional-779000           | jenkins | v1.31.2 | 06 Sep 23 12:14 PDT | 06 Sep 23 12:14 PDT |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| update-context | functional-779000                        | functional-779000           | jenkins | v1.31.2 | 06 Sep 23 12:14 PDT | 06 Sep 23 12:14 PDT |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| image          | functional-779000                        | functional-779000           | jenkins | v1.31.2 | 06 Sep 23 12:14 PDT | 06 Sep 23 12:14 PDT |
	|                | image ls --format yaml                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| image          | functional-779000                        | functional-779000           | jenkins | v1.31.2 | 06 Sep 23 12:14 PDT | 06 Sep 23 12:14 PDT |
	|                | image ls --format short                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| image          | functional-779000                        | functional-779000           | jenkins | v1.31.2 | 06 Sep 23 12:14 PDT | 06 Sep 23 12:14 PDT |
	|                | image ls --format json                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| image          | functional-779000                        | functional-779000           | jenkins | v1.31.2 | 06 Sep 23 12:14 PDT | 06 Sep 23 12:14 PDT |
	|                | image ls --format table                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| ssh            | functional-779000 ssh pgrep              | functional-779000           | jenkins | v1.31.2 | 06 Sep 23 12:14 PDT |                     |
	|                | buildkitd                                |                             |         |         |                     |                     |
	| image          | functional-779000 image build -t         | functional-779000           | jenkins | v1.31.2 | 06 Sep 23 12:14 PDT | 06 Sep 23 12:14 PDT |
	|                | localhost/my-image:functional-779000     |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr         |                             |         |         |                     |                     |
	| image          | functional-779000 image ls               | functional-779000           | jenkins | v1.31.2 | 06 Sep 23 12:14 PDT | 06 Sep 23 12:14 PDT |
	| delete         | -p functional-779000                     | functional-779000           | jenkins | v1.31.2 | 06 Sep 23 12:14 PDT | 06 Sep 23 12:14 PDT |
	| start          | -p image-147000 --driver=qemu2           | image-147000                | jenkins | v1.31.2 | 06 Sep 23 12:14 PDT | 06 Sep 23 12:15 PDT |
	|                |                                          |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-147000                | jenkins | v1.31.2 | 06 Sep 23 12:15 PDT | 06 Sep 23 12:15 PDT |
	|                | ./testdata/image-build/test-normal       |                             |         |         |                     |                     |
	|                | -p image-147000                          |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-147000                | jenkins | v1.31.2 | 06 Sep 23 12:15 PDT | 06 Sep 23 12:15 PDT |
	|                | --build-opt=build-arg=ENV_A=test_env_str |                             |         |         |                     |                     |
	|                | --build-opt=no-cache                     |                             |         |         |                     |                     |
	|                | ./testdata/image-build/test-arg -p       |                             |         |         |                     |                     |
	|                | image-147000                             |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-147000                | jenkins | v1.31.2 | 06 Sep 23 12:15 PDT | 06 Sep 23 12:15 PDT |
	|                | ./testdata/image-build/test-normal       |                             |         |         |                     |                     |
	|                | --build-opt=no-cache -p                  |                             |         |         |                     |                     |
	|                | image-147000                             |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-147000                | jenkins | v1.31.2 | 06 Sep 23 12:15 PDT | 06 Sep 23 12:15 PDT |
	|                | -f inner/Dockerfile                      |                             |         |         |                     |                     |
	|                | ./testdata/image-build/test-f            |                             |         |         |                     |                     |
	|                | -p image-147000                          |                             |         |         |                     |                     |
	| delete         | -p image-147000                          | image-147000                | jenkins | v1.31.2 | 06 Sep 23 12:15 PDT | 06 Sep 23 12:15 PDT |
	| start          | -p ingress-addon-legacy-192000           | ingress-addon-legacy-192000 | jenkins | v1.31.2 | 06 Sep 23 12:15 PDT | 06 Sep 23 12:16 PDT |
	|                | --kubernetes-version=v1.18.20            |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                   |                             |         |         |                     |                     |
	|                | --driver=qemu2                           |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-192000              | ingress-addon-legacy-192000 | jenkins | v1.31.2 | 06 Sep 23 12:16 PDT | 06 Sep 23 12:16 PDT |
	|                | addons enable ingress                    |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                   |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-192000              | ingress-addon-legacy-192000 | jenkins | v1.31.2 | 06 Sep 23 12:16 PDT | 06 Sep 23 12:16 PDT |
	|                | addons enable ingress-dns                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                   |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-192000              | ingress-addon-legacy-192000 | jenkins | v1.31.2 | 06 Sep 23 12:17 PDT | 06 Sep 23 12:17 PDT |
	|                | ssh curl -s http://127.0.0.1/            |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'             |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-192000 ip           | ingress-addon-legacy-192000 | jenkins | v1.31.2 | 06 Sep 23 12:17 PDT | 06 Sep 23 12:17 PDT |
	| addons         | ingress-addon-legacy-192000              | ingress-addon-legacy-192000 | jenkins | v1.31.2 | 06 Sep 23 12:17 PDT | 06 Sep 23 12:17 PDT |
	|                | addons disable ingress-dns               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                   |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-192000              | ingress-addon-legacy-192000 | jenkins | v1.31.2 | 06 Sep 23 12:17 PDT | 06 Sep 23 12:17 PDT |
	|                | addons disable ingress                   |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                   |                             |         |         |                     |                     |
	|----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/06 12:15:32
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.7 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 12:15:32.752593    2213 out.go:296] Setting OutFile to fd 1 ...
	I0906 12:15:32.752729    2213 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:15:32.752732    2213 out.go:309] Setting ErrFile to fd 2...
	I0906 12:15:32.752734    2213 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:15:32.752849    2213 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17116-1006/.minikube/bin
	I0906 12:15:32.753834    2213 out.go:303] Setting JSON to false
	I0906 12:15:32.768950    2213 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":906,"bootTime":1694026826,"procs":405,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:15:32.769026    2213 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 12:15:32.771669    2213 out.go:177] * [ingress-addon-legacy-192000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 12:15:32.778605    2213 out.go:177]   - MINIKUBE_LOCATION=17116
	I0906 12:15:32.778659    2213 notify.go:220] Checking for updates...
	I0906 12:15:32.782641    2213 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	I0906 12:15:32.789636    2213 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:15:32.793614    2213 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:15:32.796643    2213 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	I0906 12:15:32.800666    2213 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:15:32.803799    2213 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 12:15:32.807611    2213 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 12:15:32.814642    2213 start.go:298] selected driver: qemu2
	I0906 12:15:32.814651    2213 start.go:902] validating driver "qemu2" against <nil>
	I0906 12:15:32.814658    2213 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:15:32.816753    2213 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0906 12:15:32.819568    2213 out.go:177] * Automatically selected the socket_vmnet network
	I0906 12:15:32.822699    2213 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:15:32.822726    2213 cni.go:84] Creating CNI manager for ""
	I0906 12:15:32.822734    2213 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0906 12:15:32.822740    2213 start_flags.go:321] config:
	{Name:ingress-addon-legacy-192000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-192000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 12:15:32.827756    2213 iso.go:125] acquiring lock: {Name:mk3f7665f27187d07892ba5c4ce3c49cda04e887 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:15:32.833641    2213 out.go:177] * Starting control plane node ingress-addon-legacy-192000 in cluster ingress-addon-legacy-192000
	I0906 12:15:32.837683    2213 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0906 12:15:32.893325    2213 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I0906 12:15:32.893347    2213 cache.go:57] Caching tarball of preloaded images
	I0906 12:15:32.893502    2213 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0906 12:15:32.897626    2213 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0906 12:15:32.909657    2213 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0906 12:15:32.990445    2213 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4?checksum=md5:c8c260b886393123ce9d312d8ac2379e -> /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I0906 12:15:39.784901    2213 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0906 12:15:39.785037    2213 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0906 12:15:40.533652    2213 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0906 12:15:40.533825    2213 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/ingress-addon-legacy-192000/config.json ...
	I0906 12:15:40.533846    2213 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/ingress-addon-legacy-192000/config.json: {Name:mk84deb02067b54c3fb37e827a032edcf654f451 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:15:40.534084    2213 start.go:365] acquiring machines lock for ingress-addon-legacy-192000: {Name:mkcd55fe6404821d1fd04819f324aec293fbf60b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:15:40.534118    2213 start.go:369] acquired machines lock for "ingress-addon-legacy-192000" in 28.083µs
	I0906 12:15:40.534129    2213 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-192000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 K
ubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-192000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:doc
ker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:15:40.534157    2213 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:15:40.539140    2213 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0906 12:15:40.553573    2213 start.go:159] libmachine.API.Create for "ingress-addon-legacy-192000" (driver="qemu2")
	I0906 12:15:40.553594    2213 client.go:168] LocalClient.Create starting
	I0906 12:15:40.553685    2213 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem
	I0906 12:15:40.553712    2213 main.go:141] libmachine: Decoding PEM data...
	I0906 12:15:40.553722    2213 main.go:141] libmachine: Parsing certificate...
	I0906 12:15:40.553760    2213 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/cert.pem
	I0906 12:15:40.553782    2213 main.go:141] libmachine: Decoding PEM data...
	I0906 12:15:40.553792    2213 main.go:141] libmachine: Parsing certificate...
	I0906 12:15:40.554120    2213 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17116-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 12:15:40.726185    2213 main.go:141] libmachine: Creating SSH key...
	I0906 12:15:40.810681    2213 main.go:141] libmachine: Creating Disk image...
	I0906 12:15:40.810687    2213 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:15:40.810810    2213 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/ingress-addon-legacy-192000/disk.qcow2.raw /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/ingress-addon-legacy-192000/disk.qcow2
	I0906 12:15:40.819161    2213 main.go:141] libmachine: STDOUT: 
	I0906 12:15:40.819188    2213 main.go:141] libmachine: STDERR: 
	I0906 12:15:40.819263    2213 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/ingress-addon-legacy-192000/disk.qcow2 +20000M
	I0906 12:15:40.826420    2213 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:15:40.826433    2213 main.go:141] libmachine: STDERR: 
	I0906 12:15:40.826456    2213 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/ingress-addon-legacy-192000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/ingress-addon-legacy-192000/disk.qcow2
	I0906 12:15:40.826464    2213 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:15:40.826502    2213 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4096 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/ingress-addon-legacy-192000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/ingress-addon-legacy-192000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/ingress-addon-legacy-192000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:77:fa:04:ef:28 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/ingress-addon-legacy-192000/disk.qcow2
	I0906 12:15:40.860529    2213 main.go:141] libmachine: STDOUT: 
	I0906 12:15:40.860548    2213 main.go:141] libmachine: STDERR: 
	I0906 12:15:40.860553    2213 main.go:141] libmachine: Attempt 0
	I0906 12:15:40.860567    2213 main.go:141] libmachine: Searching for 8a:77:fa:4:ef:28 in /var/db/dhcpd_leases ...
	I0906 12:15:40.860629    2213 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0906 12:15:40.860648    2213 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:26:1c:6b:d2:0:15 ID:1,26:1c:6b:d2:0:15 Lease:0x64fa213f}
	I0906 12:15:40.860665    2213 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ee:a1:3b:da:77:14 ID:1,ee:a1:3b:da:77:14 Lease:0x64fa205b}
	I0906 12:15:40.860672    2213 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:b2:a6:d2:d8:8b:16 ID:1,b2:a6:d2:d8:8b:16 Lease:0x64f8cece}
	I0906 12:15:40.860678    2213 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:92:df:5f:68:42:49 ID:1,92:df:5f:68:42:49 Lease:0x64fa200c}
	I0906 12:15:42.861013    2213 main.go:141] libmachine: Attempt 1
	I0906 12:15:42.861062    2213 main.go:141] libmachine: Searching for 8a:77:fa:4:ef:28 in /var/db/dhcpd_leases ...
	I0906 12:15:42.861296    2213 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0906 12:15:42.861340    2213 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:26:1c:6b:d2:0:15 ID:1,26:1c:6b:d2:0:15 Lease:0x64fa213f}
	I0906 12:15:42.861388    2213 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ee:a1:3b:da:77:14 ID:1,ee:a1:3b:da:77:14 Lease:0x64fa205b}
	I0906 12:15:42.861420    2213 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:b2:a6:d2:d8:8b:16 ID:1,b2:a6:d2:d8:8b:16 Lease:0x64f8cece}
	I0906 12:15:42.861448    2213 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:92:df:5f:68:42:49 ID:1,92:df:5f:68:42:49 Lease:0x64fa200c}
	I0906 12:15:44.863560    2213 main.go:141] libmachine: Attempt 2
	I0906 12:15:44.863598    2213 main.go:141] libmachine: Searching for 8a:77:fa:4:ef:28 in /var/db/dhcpd_leases ...
	I0906 12:15:44.863723    2213 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0906 12:15:44.863739    2213 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:26:1c:6b:d2:0:15 ID:1,26:1c:6b:d2:0:15 Lease:0x64fa213f}
	I0906 12:15:44.863752    2213 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ee:a1:3b:da:77:14 ID:1,ee:a1:3b:da:77:14 Lease:0x64fa205b}
	I0906 12:15:44.863758    2213 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:b2:a6:d2:d8:8b:16 ID:1,b2:a6:d2:d8:8b:16 Lease:0x64f8cece}
	I0906 12:15:44.863764    2213 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:92:df:5f:68:42:49 ID:1,92:df:5f:68:42:49 Lease:0x64fa200c}
	I0906 12:15:46.865786    2213 main.go:141] libmachine: Attempt 3
	I0906 12:15:46.865795    2213 main.go:141] libmachine: Searching for 8a:77:fa:4:ef:28 in /var/db/dhcpd_leases ...
	I0906 12:15:46.865841    2213 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0906 12:15:46.865849    2213 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:26:1c:6b:d2:0:15 ID:1,26:1c:6b:d2:0:15 Lease:0x64fa213f}
	I0906 12:15:46.865854    2213 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ee:a1:3b:da:77:14 ID:1,ee:a1:3b:da:77:14 Lease:0x64fa205b}
	I0906 12:15:46.865865    2213 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:b2:a6:d2:d8:8b:16 ID:1,b2:a6:d2:d8:8b:16 Lease:0x64f8cece}
	I0906 12:15:46.865872    2213 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:92:df:5f:68:42:49 ID:1,92:df:5f:68:42:49 Lease:0x64fa200c}
	I0906 12:15:48.867898    2213 main.go:141] libmachine: Attempt 4
	I0906 12:15:48.867915    2213 main.go:141] libmachine: Searching for 8a:77:fa:4:ef:28 in /var/db/dhcpd_leases ...
	I0906 12:15:48.867987    2213 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0906 12:15:48.867998    2213 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:26:1c:6b:d2:0:15 ID:1,26:1c:6b:d2:0:15 Lease:0x64fa213f}
	I0906 12:15:48.868003    2213 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ee:a1:3b:da:77:14 ID:1,ee:a1:3b:da:77:14 Lease:0x64fa205b}
	I0906 12:15:48.868009    2213 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:b2:a6:d2:d8:8b:16 ID:1,b2:a6:d2:d8:8b:16 Lease:0x64f8cece}
	I0906 12:15:48.868015    2213 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:92:df:5f:68:42:49 ID:1,92:df:5f:68:42:49 Lease:0x64fa200c}
	I0906 12:15:50.870078    2213 main.go:141] libmachine: Attempt 5
	I0906 12:15:50.870102    2213 main.go:141] libmachine: Searching for 8a:77:fa:4:ef:28 in /var/db/dhcpd_leases ...
	I0906 12:15:50.870178    2213 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0906 12:15:50.870189    2213 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:26:1c:6b:d2:0:15 ID:1,26:1c:6b:d2:0:15 Lease:0x64fa213f}
	I0906 12:15:50.870195    2213 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ee:a1:3b:da:77:14 ID:1,ee:a1:3b:da:77:14 Lease:0x64fa205b}
	I0906 12:15:50.870200    2213 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:b2:a6:d2:d8:8b:16 ID:1,b2:a6:d2:d8:8b:16 Lease:0x64f8cece}
	I0906 12:15:50.870206    2213 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:92:df:5f:68:42:49 ID:1,92:df:5f:68:42:49 Lease:0x64fa200c}
	I0906 12:15:52.872455    2213 main.go:141] libmachine: Attempt 6
	I0906 12:15:52.872622    2213 main.go:141] libmachine: Searching for 8a:77:fa:4:ef:28 in /var/db/dhcpd_leases ...
	I0906 12:15:52.872842    2213 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0906 12:15:52.872867    2213 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:8a:77:fa:4:ef:28 ID:1,8a:77:fa:4:ef:28 Lease:0x64fa2167}
	I0906 12:15:52.872884    2213 main.go:141] libmachine: Found match: 8a:77:fa:4:ef:28
	I0906 12:15:52.872903    2213 main.go:141] libmachine: IP: 192.168.105.6
	I0906 12:15:52.872914    2213 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.6)...
	I0906 12:15:54.894457    2213 machine.go:88] provisioning docker machine ...
	I0906 12:15:54.894521    2213 buildroot.go:166] provisioning hostname "ingress-addon-legacy-192000"
	I0906 12:15:54.894729    2213 main.go:141] libmachine: Using SSH client type: native
	I0906 12:15:54.895562    2213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1045c23b0] 0x1045c4e10 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0906 12:15:54.895586    2213 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-192000 && echo "ingress-addon-legacy-192000" | sudo tee /etc/hostname
	I0906 12:15:54.984733    2213 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-192000
	
	I0906 12:15:54.984860    2213 main.go:141] libmachine: Using SSH client type: native
	I0906 12:15:54.985296    2213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1045c23b0] 0x1045c4e10 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0906 12:15:54.985314    2213 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-192000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-192000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-192000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 12:15:55.059675    2213 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 12:15:55.059693    2213 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17116-1006/.minikube CaCertPath:/Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17116-1006/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17116-1006/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17116-1006/.minikube}
	I0906 12:15:55.059708    2213 buildroot.go:174] setting up certificates
	I0906 12:15:55.059716    2213 provision.go:83] configureAuth start
	I0906 12:15:55.059721    2213 provision.go:138] copyHostCerts
	I0906 12:15:55.059767    2213 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/17116-1006/.minikube/cert.pem
	I0906 12:15:55.059844    2213 exec_runner.go:144] found /Users/jenkins/minikube-integration/17116-1006/.minikube/cert.pem, removing ...
	I0906 12:15:55.059857    2213 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17116-1006/.minikube/cert.pem
	I0906 12:15:55.060082    2213 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17116-1006/.minikube/cert.pem (1123 bytes)
	I0906 12:15:55.060297    2213 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/17116-1006/.minikube/key.pem
	I0906 12:15:55.060333    2213 exec_runner.go:144] found /Users/jenkins/minikube-integration/17116-1006/.minikube/key.pem, removing ...
	I0906 12:15:55.060339    2213 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17116-1006/.minikube/key.pem
	I0906 12:15:55.060415    2213 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17116-1006/.minikube/key.pem (1679 bytes)
	I0906 12:15:55.060534    2213 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/17116-1006/.minikube/ca.pem
	I0906 12:15:55.060564    2213 exec_runner.go:144] found /Users/jenkins/minikube-integration/17116-1006/.minikube/ca.pem, removing ...
	I0906 12:15:55.060569    2213 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17116-1006/.minikube/ca.pem
	I0906 12:15:55.060650    2213 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17116-1006/.minikube/ca.pem (1078 bytes)
	I0906 12:15:55.060797    2213 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-192000 san=[192.168.105.6 192.168.105.6 localhost 127.0.0.1 minikube ingress-addon-legacy-192000]
	I0906 12:15:55.138068    2213 provision.go:172] copyRemoteCerts
	I0906 12:15:55.138133    2213 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 12:15:55.138147    2213 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/ingress-addon-legacy-192000/id_rsa Username:docker}
	I0906 12:15:55.172488    2213 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0906 12:15:55.172546    2213 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 12:15:55.179818    2213 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0906 12:15:55.179875    2213 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0906 12:15:55.186724    2213 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0906 12:15:55.186765    2213 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 12:15:55.193594    2213 provision.go:86] duration metric: configureAuth took 133.870834ms
	I0906 12:15:55.193606    2213 buildroot.go:189] setting minikube options for container-runtime
	I0906 12:15:55.193701    2213 config.go:182] Loaded profile config "ingress-addon-legacy-192000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0906 12:15:55.193734    2213 main.go:141] libmachine: Using SSH client type: native
	I0906 12:15:55.193950    2213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1045c23b0] 0x1045c4e10 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0906 12:15:55.193955    2213 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 12:15:55.255586    2213 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0906 12:15:55.255597    2213 buildroot.go:70] root file system type: tmpfs
	I0906 12:15:55.255656    2213 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 12:15:55.255716    2213 main.go:141] libmachine: Using SSH client type: native
	I0906 12:15:55.255968    2213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1045c23b0] 0x1045c4e10 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0906 12:15:55.256006    2213 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 12:15:55.319017    2213 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 12:15:55.319091    2213 main.go:141] libmachine: Using SSH client type: native
	I0906 12:15:55.319341    2213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1045c23b0] 0x1045c4e10 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0906 12:15:55.319351    2213 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 12:15:55.644260    2213 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0906 12:15:55.644272    2213 machine.go:91] provisioned docker machine in 749.787583ms
	I0906 12:15:55.644277    2213 client.go:171] LocalClient.Create took 15.090790625s
	I0906 12:15:55.644291    2213 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-192000" took 15.0908315s
	I0906 12:15:55.644298    2213 start.go:300] post-start starting for "ingress-addon-legacy-192000" (driver="qemu2")
	I0906 12:15:55.644303    2213 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 12:15:55.644366    2213 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 12:15:55.644375    2213 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/ingress-addon-legacy-192000/id_rsa Username:docker}
	I0906 12:15:55.676169    2213 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 12:15:55.677425    2213 info.go:137] Remote host: Buildroot 2021.02.12
	I0906 12:15:55.677431    2213 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17116-1006/.minikube/addons for local assets ...
	I0906 12:15:55.677499    2213 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17116-1006/.minikube/files for local assets ...
	I0906 12:15:55.677604    2213 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17116-1006/.minikube/files/etc/ssl/certs/14212.pem -> 14212.pem in /etc/ssl/certs
	I0906 12:15:55.677609    2213 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17116-1006/.minikube/files/etc/ssl/certs/14212.pem -> /etc/ssl/certs/14212.pem
	I0906 12:15:55.677718    2213 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 12:15:55.680617    2213 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17116-1006/.minikube/files/etc/ssl/certs/14212.pem --> /etc/ssl/certs/14212.pem (1708 bytes)
	I0906 12:15:55.687678    2213 start.go:303] post-start completed in 43.374625ms
	I0906 12:15:55.688097    2213 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/ingress-addon-legacy-192000/config.json ...
	I0906 12:15:55.688259    2213 start.go:128] duration metric: createHost completed in 15.154209417s
	I0906 12:15:55.688288    2213 main.go:141] libmachine: Using SSH client type: native
	I0906 12:15:55.688500    2213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1045c23b0] 0x1045c4e10 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0906 12:15:55.688504    2213 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0906 12:15:55.747518    2213 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694027755.475168252
	
	I0906 12:15:55.747529    2213 fix.go:206] guest clock: 1694027755.475168252
	I0906 12:15:55.747533    2213 fix.go:219] Guest: 2023-09-06 12:15:55.475168252 -0700 PDT Remote: 2023-09-06 12:15:55.688264 -0700 PDT m=+22.954968626 (delta=-213.095748ms)
	I0906 12:15:55.747544    2213 fix.go:190] guest clock delta is within tolerance: -213.095748ms
	I0906 12:15:55.747547    2213 start.go:83] releasing machines lock for "ingress-addon-legacy-192000", held for 15.21353575s
	I0906 12:15:55.747816    2213 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 12:15:55.747816    2213 ssh_runner.go:195] Run: cat /version.json
	I0906 12:15:55.747852    2213 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/ingress-addon-legacy-192000/id_rsa Username:docker}
	I0906 12:15:55.747840    2213 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/ingress-addon-legacy-192000/id_rsa Username:docker}
	I0906 12:15:55.783832    2213 ssh_runner.go:195] Run: systemctl --version
	I0906 12:15:55.820943    2213 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 12:15:55.822991    2213 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 12:15:55.823026    2213 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0906 12:15:55.825833    2213 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0906 12:15:55.831016    2213 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 12:15:55.831022    2213 start.go:466] detecting cgroup driver to use...
	I0906 12:15:55.831088    2213 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:15:55.837800    2213 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0906 12:15:55.841330    2213 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0906 12:15:55.844871    2213 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0906 12:15:55.844902    2213 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0906 12:15:55.848499    2213 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:15:55.851461    2213 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0906 12:15:55.854421    2213 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:15:55.857631    2213 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 12:15:55.861107    2213 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0906 12:15:55.864648    2213 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 12:15:55.867724    2213 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 12:15:55.870357    2213 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:15:55.956536    2213 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0906 12:15:55.963699    2213 start.go:466] detecting cgroup driver to use...
	I0906 12:15:55.963765    2213 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 12:15:55.969011    2213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:15:55.973922    2213 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 12:15:55.980116    2213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:15:55.984645    2213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:15:55.989333    2213 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0906 12:15:56.041082    2213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:15:56.046601    2213 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:15:56.052023    2213 ssh_runner.go:195] Run: which cri-dockerd
	I0906 12:15:56.053318    2213 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0906 12:15:56.056379    2213 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0906 12:15:56.061737    2213 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 12:15:56.141860    2213 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 12:15:56.232654    2213 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0906 12:15:56.232668    2213 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0906 12:15:56.238183    2213 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:15:56.297180    2213 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 12:15:57.453897    2213 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.156708709s)
	I0906 12:15:57.453971    2213 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 12:15:57.463753    2213 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 12:15:57.479518    2213 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 24.0.5 ...
	I0906 12:15:57.479645    2213 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0906 12:15:57.481001    2213 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 12:15:57.484910    2213 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0906 12:15:57.484952    2213 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 12:15:57.490147    2213 docker.go:636] Got preloaded images: 
	I0906 12:15:57.490156    2213 docker.go:642] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0906 12:15:57.490191    2213 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0906 12:15:57.493045    2213 ssh_runner.go:195] Run: which lz4
	I0906 12:15:57.494160    2213 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0906 12:15:57.494249    2213 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0906 12:15:57.495485    2213 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0906 12:15:57.495499    2213 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (459739018 bytes)
	I0906 12:15:59.187613    2213 docker.go:600] Took 1.693414 seconds to copy over tarball
	I0906 12:15:59.187671    2213 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0906 12:16:00.488159    2213 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.30047975s)
	I0906 12:16:00.488175    2213 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0906 12:16:00.508949    2213 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0906 12:16:00.512978    2213 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I0906 12:16:00.518052    2213 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:16:00.566589    2213 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 12:16:02.035845    2213 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.46925075s)
	I0906 12:16:02.035948    2213 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 12:16:02.041881    2213 docker.go:636] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0906 12:16:02.041893    2213 docker.go:642] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0906 12:16:02.041896    2213 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0906 12:16:02.050729    2213 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0906 12:16:02.050863    2213 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0906 12:16:02.051256    2213 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 12:16:02.051722    2213 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0906 12:16:02.051763    2213 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0906 12:16:02.053483    2213 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0906 12:16:02.053642    2213 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0906 12:16:02.053783    2213 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0906 12:16:02.062662    2213 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0906 12:16:02.062715    2213 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0906 12:16:02.062777    2213 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0906 12:16:02.062814    2213 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 12:16:02.062861    2213 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0906 12:16:02.064829    2213 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0906 12:16:02.064878    2213 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0906 12:16:02.064999    2213 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	W0906 12:16:02.645100    2213 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0906 12:16:02.645236    2213 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0906 12:16:02.651699    2213 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I0906 12:16:02.651721    2213 docker.go:316] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0906 12:16:02.651772    2213 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	I0906 12:16:02.657766    2213 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	W0906 12:16:02.683504    2213 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I0906 12:16:02.683615    2213 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0906 12:16:02.689795    2213 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I0906 12:16:02.689817    2213 docker.go:316] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0906 12:16:02.689866    2213 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	I0906 12:16:02.696119    2213 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	W0906 12:16:02.933021    2213 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I0906 12:16:02.933132    2213 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0906 12:16:02.940289    2213 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I0906 12:16:02.940312    2213 docker.go:316] Removing image: registry.k8s.io/coredns:1.6.7
	I0906 12:16:02.940361    2213 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	I0906 12:16:02.947326    2213 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	W0906 12:16:03.299294    2213 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0906 12:16:03.299403    2213 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0906 12:16:03.305679    2213 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I0906 12:16:03.305705    2213 docker.go:316] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0906 12:16:03.305747    2213 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0906 12:16:03.311622    2213 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	W0906 12:16:03.501938    2213 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0906 12:16:03.502036    2213 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0906 12:16:03.508752    2213 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I0906 12:16:03.508774    2213 docker.go:316] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0906 12:16:03.508811    2213 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0906 12:16:03.519652    2213 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	W0906 12:16:03.629611    2213 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0906 12:16:03.629752    2213 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 12:16:03.636224    2213 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0906 12:16:03.636252    2213 docker.go:316] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 12:16:03.636301    2213 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 12:16:03.647192    2213 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	W0906 12:16:03.722177    2213 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0906 12:16:03.722305    2213 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0906 12:16:03.728560    2213 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I0906 12:16:03.728582    2213 docker.go:316] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0906 12:16:03.728625    2213 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0906 12:16:03.734595    2213 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I0906 12:16:03.916369    2213 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0906 12:16:03.934839    2213 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I0906 12:16:03.934884    2213 docker.go:316] Removing image: registry.k8s.io/pause:3.2
	I0906 12:16:03.934991    2213 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I0906 12:16:03.946557    2213 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I0906 12:16:03.946629    2213 cache_images.go:92] LoadImages completed in 1.904740209s
	W0906 12:16:03.946712    2213 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20: no such file or directory
	I0906 12:16:03.946808    2213 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0906 12:16:03.960392    2213 cni.go:84] Creating CNI manager for ""
	I0906 12:16:03.960418    2213 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0906 12:16:03.960435    2213 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0906 12:16:03.960450    2213 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.6 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-192000 NodeName:ingress-addon-legacy-192000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0906 12:16:03.960562    2213 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.6
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-192000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.6
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.6"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 12:16:03.960620    2213 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-192000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-192000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0906 12:16:03.960688    2213 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0906 12:16:03.965637    2213 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 12:16:03.965701    2213 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 12:16:03.969684    2213 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (355 bytes)
	I0906 12:16:03.976005    2213 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0906 12:16:03.981498    2213 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2127 bytes)
	I0906 12:16:03.987081    2213 ssh_runner.go:195] Run: grep 192.168.105.6	control-plane.minikube.internal$ /etc/hosts
	I0906 12:16:03.988329    2213 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.6	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 12:16:03.992126    2213 certs.go:56] Setting up /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/ingress-addon-legacy-192000 for IP: 192.168.105.6
	I0906 12:16:03.992136    2213 certs.go:190] acquiring lock for shared ca certs: {Name:mk2fda2e4681223badcda373e6897c8a04d70962 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:16:03.992274    2213 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17116-1006/.minikube/ca.key
	I0906 12:16:03.992314    2213 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17116-1006/.minikube/proxy-client-ca.key
	I0906 12:16:03.992340    2213 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/ingress-addon-legacy-192000/client.key
	I0906 12:16:03.992346    2213 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/ingress-addon-legacy-192000/client.crt with IP's: []
	I0906 12:16:04.206682    2213 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/ingress-addon-legacy-192000/client.crt ...
	I0906 12:16:04.206689    2213 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/ingress-addon-legacy-192000/client.crt: {Name:mk1e2d7c5006644ad3c8a70566057704f084b606 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:16:04.207014    2213 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/ingress-addon-legacy-192000/client.key ...
	I0906 12:16:04.207018    2213 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/ingress-addon-legacy-192000/client.key: {Name:mkb4db74026bfdd5dc214ad2db528969d381b717 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:16:04.207156    2213 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/ingress-addon-legacy-192000/apiserver.key.b354f644
	I0906 12:16:04.207166    2213 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/ingress-addon-legacy-192000/apiserver.crt.b354f644 with IP's: [192.168.105.6 10.96.0.1 127.0.0.1 10.0.0.1]
	I0906 12:16:04.407070    2213 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/ingress-addon-legacy-192000/apiserver.crt.b354f644 ...
	I0906 12:16:04.407079    2213 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/ingress-addon-legacy-192000/apiserver.crt.b354f644: {Name:mk0b91414331eb84ddb1275ce8312a7c9198e8ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:16:04.407266    2213 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/ingress-addon-legacy-192000/apiserver.key.b354f644 ...
	I0906 12:16:04.407269    2213 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/ingress-addon-legacy-192000/apiserver.key.b354f644: {Name:mkf7c487717558a983ad10f4ca076587fc2a3683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:16:04.407388    2213 certs.go:337] copying /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/ingress-addon-legacy-192000/apiserver.crt.b354f644 -> /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/ingress-addon-legacy-192000/apiserver.crt
	I0906 12:16:04.407599    2213 certs.go:341] copying /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/ingress-addon-legacy-192000/apiserver.key.b354f644 -> /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/ingress-addon-legacy-192000/apiserver.key
	I0906 12:16:04.407701    2213 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/ingress-addon-legacy-192000/proxy-client.key
	I0906 12:16:04.407711    2213 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/ingress-addon-legacy-192000/proxy-client.crt with IP's: []
	I0906 12:16:04.478964    2213 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/ingress-addon-legacy-192000/proxy-client.crt ...
	I0906 12:16:04.478967    2213 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/ingress-addon-legacy-192000/proxy-client.crt: {Name:mk308a678e353b2ddc22bc5225d413803dff57db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:16:04.479109    2213 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/ingress-addon-legacy-192000/proxy-client.key ...
	I0906 12:16:04.479112    2213 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/ingress-addon-legacy-192000/proxy-client.key: {Name:mke75ee48db24340374f3f88bfc93cc266d6380b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:16:04.479229    2213 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/ingress-addon-legacy-192000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0906 12:16:04.479245    2213 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/ingress-addon-legacy-192000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0906 12:16:04.479257    2213 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/ingress-addon-legacy-192000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0906 12:16:04.479269    2213 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/ingress-addon-legacy-192000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0906 12:16:04.479282    2213 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17116-1006/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0906 12:16:04.479295    2213 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17116-1006/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0906 12:16:04.479314    2213 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17116-1006/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0906 12:16:04.479329    2213 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17116-1006/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0906 12:16:04.479419    2213 certs.go:437] found cert: /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/Users/jenkins/minikube-integration/17116-1006/.minikube/certs/1421.pem (1338 bytes)
	W0906 12:16:04.479455    2213 certs.go:433] ignoring /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/Users/jenkins/minikube-integration/17116-1006/.minikube/certs/1421_empty.pem, impossibly tiny 0 bytes
	I0906 12:16:04.479471    2213 certs.go:437] found cert: /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 12:16:04.479502    2213 certs.go:437] found cert: /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem (1078 bytes)
	I0906 12:16:04.479527    2213 certs.go:437] found cert: /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/Users/jenkins/minikube-integration/17116-1006/.minikube/certs/cert.pem (1123 bytes)
	I0906 12:16:04.479561    2213 certs.go:437] found cert: /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/Users/jenkins/minikube-integration/17116-1006/.minikube/certs/key.pem (1679 bytes)
	I0906 12:16:04.479615    2213 certs.go:437] found cert: /Users/jenkins/minikube-integration/17116-1006/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17116-1006/.minikube/files/etc/ssl/certs/14212.pem (1708 bytes)
	I0906 12:16:04.479642    2213 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17116-1006/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:16:04.479654    2213 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/1421.pem -> /usr/share/ca-certificates/1421.pem
	I0906 12:16:04.479664    2213 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17116-1006/.minikube/files/etc/ssl/certs/14212.pem -> /usr/share/ca-certificates/14212.pem
	I0906 12:16:04.480046    2213 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/ingress-addon-legacy-192000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0906 12:16:04.487988    2213 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/ingress-addon-legacy-192000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0906 12:16:04.494742    2213 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/ingress-addon-legacy-192000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 12:16:04.501488    2213 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/ingress-addon-legacy-192000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 12:16:04.508542    2213 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17116-1006/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 12:16:04.515312    2213 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17116-1006/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0906 12:16:04.521812    2213 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17116-1006/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 12:16:04.528991    2213 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17116-1006/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0906 12:16:04.535878    2213 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17116-1006/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 12:16:04.542388    2213 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/1421.pem --> /usr/share/ca-certificates/1421.pem (1338 bytes)
	I0906 12:16:04.549264    2213 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17116-1006/.minikube/files/etc/ssl/certs/14212.pem --> /usr/share/ca-certificates/14212.pem (1708 bytes)
	I0906 12:16:04.556174    2213 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 12:16:04.561339    2213 ssh_runner.go:195] Run: openssl version
	I0906 12:16:04.563194    2213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14212.pem && ln -fs /usr/share/ca-certificates/14212.pem /etc/ssl/certs/14212.pem"
	I0906 12:16:04.566221    2213 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14212.pem
	I0906 12:16:04.567685    2213 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep  6 19:11 /usr/share/ca-certificates/14212.pem
	I0906 12:16:04.567704    2213 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14212.pem
	I0906 12:16:04.569343    2213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14212.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 12:16:04.572606    2213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 12:16:04.575529    2213 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:16:04.576967    2213 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep  6 19:10 /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:16:04.576985    2213 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:16:04.578756    2213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 12:16:04.581918    2213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1421.pem && ln -fs /usr/share/ca-certificates/1421.pem /etc/ssl/certs/1421.pem"
	I0906 12:16:04.585278    2213 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1421.pem
	I0906 12:16:04.586696    2213 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep  6 19:11 /usr/share/ca-certificates/1421.pem
	I0906 12:16:04.586718    2213 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1421.pem
	I0906 12:16:04.588497    2213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1421.pem /etc/ssl/certs/51391683.0"
	I0906 12:16:04.591486    2213 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0906 12:16:04.592672    2213 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0906 12:16:04.592701    2213 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-192000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.18.20 ClusterName:ingress-addon-legacy-192000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.6 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 12:16:04.592772    2213 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 12:16:04.603562    2213 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 12:16:04.606394    2213 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 12:16:04.609543    2213 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 12:16:04.612526    2213 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 12:16:04.612543    2213 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0906 12:16:04.640193    2213 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0906 12:16:04.640219    2213 kubeadm.go:322] [preflight] Running pre-flight checks
	I0906 12:16:04.723201    2213 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 12:16:04.723257    2213 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 12:16:04.723302    2213 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0906 12:16:04.772911    2213 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 12:16:04.773426    2213 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 12:16:04.773487    2213 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0906 12:16:04.843022    2213 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 12:16:04.853206    2213 out.go:204]   - Generating certificates and keys ...
	I0906 12:16:04.853248    2213 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0906 12:16:04.853280    2213 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0906 12:16:04.930140    2213 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0906 12:16:05.062613    2213 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0906 12:16:05.106750    2213 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0906 12:16:05.183305    2213 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0906 12:16:05.369342    2213 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0906 12:16:05.369470    2213 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-192000 localhost] and IPs [192.168.105.6 127.0.0.1 ::1]
	I0906 12:16:05.452998    2213 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0906 12:16:05.453067    2213 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-192000 localhost] and IPs [192.168.105.6 127.0.0.1 ::1]
	I0906 12:16:05.511831    2213 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0906 12:16:05.577677    2213 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0906 12:16:05.751346    2213 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0906 12:16:05.751395    2213 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 12:16:05.795887    2213 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 12:16:05.892169    2213 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 12:16:06.048033    2213 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 12:16:06.220913    2213 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 12:16:06.221155    2213 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 12:16:06.225414    2213 out.go:204]   - Booting up control plane ...
	I0906 12:16:06.225474    2213 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 12:16:06.225843    2213 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 12:16:06.226304    2213 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 12:16:06.226820    2213 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 12:16:06.228253    2213 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0906 12:16:16.739964    2213 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.510987 seconds
	I0906 12:16:16.740265    2213 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0906 12:16:16.762561    2213 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0906 12:16:17.293651    2213 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0906 12:16:17.294019    2213 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-192000 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0906 12:16:17.808681    2213 kubeadm.go:322] [bootstrap-token] Using token: cnvltr.h1rkjzf3mnhghgb1
	I0906 12:16:17.812425    2213 out.go:204]   - Configuring RBAC rules ...
	I0906 12:16:17.812609    2213 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0906 12:16:17.818261    2213 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0906 12:16:17.827608    2213 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0906 12:16:17.830081    2213 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0906 12:16:17.832604    2213 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0906 12:16:17.834656    2213 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0906 12:16:17.841424    2213 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0906 12:16:18.033703    2213 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0906 12:16:18.219435    2213 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0906 12:16:18.219958    2213 kubeadm.go:322] 
	I0906 12:16:18.219994    2213 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0906 12:16:18.219999    2213 kubeadm.go:322] 
	I0906 12:16:18.220044    2213 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0906 12:16:18.220049    2213 kubeadm.go:322] 
	I0906 12:16:18.220062    2213 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0906 12:16:18.220108    2213 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0906 12:16:18.220142    2213 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0906 12:16:18.220150    2213 kubeadm.go:322] 
	I0906 12:16:18.220179    2213 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0906 12:16:18.220236    2213 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0906 12:16:18.220277    2213 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0906 12:16:18.220281    2213 kubeadm.go:322] 
	I0906 12:16:18.220331    2213 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0906 12:16:18.220384    2213 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0906 12:16:18.220390    2213 kubeadm.go:322] 
	I0906 12:16:18.220437    2213 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token cnvltr.h1rkjzf3mnhghgb1 \
	I0906 12:16:18.220506    2213 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:17b7f6de3b10bbc20f0186efe5750d1dace064ea3ce551ed11c6083fb754ab3d \
	I0906 12:16:18.220524    2213 kubeadm.go:322]     --control-plane 
	I0906 12:16:18.220527    2213 kubeadm.go:322] 
	I0906 12:16:18.220570    2213 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0906 12:16:18.220580    2213 kubeadm.go:322] 
	I0906 12:16:18.220623    2213 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token cnvltr.h1rkjzf3mnhghgb1 \
	I0906 12:16:18.220709    2213 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:17b7f6de3b10bbc20f0186efe5750d1dace064ea3ce551ed11c6083fb754ab3d 
	I0906 12:16:18.220860    2213 kubeadm.go:322] W0906 19:16:04.367851    1407 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0906 12:16:18.220965    2213 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0906 12:16:18.221060    2213 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.5. Latest validated version: 19.03
	I0906 12:16:18.221125    2213 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 12:16:18.221206    2213 kubeadm.go:322] W0906 19:16:05.952879    1407 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0906 12:16:18.221282    2213 kubeadm.go:322] W0906 19:16:05.953640    1407 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0906 12:16:18.221289    2213 cni.go:84] Creating CNI manager for ""
	I0906 12:16:18.221298    2213 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0906 12:16:18.221309    2213 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 12:16:18.221396    2213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:16:18.221396    2213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=5d7e0b4e9bf2e12b4952bdb0ed6ce3c8b866f138 minikube.k8s.io/name=ingress-addon-legacy-192000 minikube.k8s.io/updated_at=2023_09_06T12_16_18_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:16:18.226628    2213 ops.go:34] apiserver oom_adj: -16
	I0906 12:16:18.310530    2213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:16:18.343860    2213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:16:18.879882    2213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:16:19.379966    2213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:16:19.879807    2213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:16:20.379843    2213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:16:20.879865    2213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:16:21.379834    2213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:16:21.879823    2213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:16:22.379818    2213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:16:22.879869    2213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:16:23.379581    2213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:16:23.879890    2213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:16:24.379772    2213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:16:24.879934    2213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:16:25.379783    2213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:16:25.879901    2213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:16:26.379539    2213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:16:26.879866    2213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:16:27.379754    2213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:16:27.879618    2213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:16:28.379801    2213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:16:28.879834    2213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:16:29.379677    2213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:16:29.879704    2213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:16:30.379582    2213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:16:30.879744    2213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:16:31.379699    2213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:16:31.879815    2213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:16:32.379561    2213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:16:32.879546    2213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:16:33.379782    2213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:16:33.879700    2213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:16:34.378429    2213 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:16:34.412989    2213 kubeadm.go:1081] duration metric: took 16.191777625s to wait for elevateKubeSystemPrivileges.
	I0906 12:16:34.413003    2213 kubeadm.go:406] StartCluster complete in 29.820522667s
	I0906 12:16:34.413012    2213 settings.go:142] acquiring lock: {Name:mkdab5683cd98d968361f82dee37aa31492af7cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:16:34.413098    2213 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17116-1006/kubeconfig
	I0906 12:16:34.413451    2213 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17116-1006/kubeconfig: {Name:mk69a76938a18011410dd32eccb7fee080824c64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:16:34.413685    2213 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0906 12:16:34.413715    2213 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0906 12:16:34.413756    2213 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-192000"
	I0906 12:16:34.413764    2213 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-192000"
	I0906 12:16:34.413788    2213 host.go:66] Checking if "ingress-addon-legacy-192000" exists ...
	I0906 12:16:34.413787    2213 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-192000"
	I0906 12:16:34.413799    2213 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-192000"
	I0906 12:16:34.414210    2213 kapi.go:59] client config for ingress-addon-legacy-192000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/ingress-addon-legacy-192000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/ingress-addon-legacy-192000/client.key", CAFile:"/Users/jenkins/minikube-integration/17116-1006/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint
8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10597dd20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 12:16:34.414569    2213 config.go:182] Loaded profile config "ingress-addon-legacy-192000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0906 12:16:34.414572    2213 cert_rotation.go:137] Starting client certificate rotation controller
	I0906 12:16:34.415237    2213 kapi.go:59] client config for ingress-addon-legacy-192000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/ingress-addon-legacy-192000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/ingress-addon-legacy-192000/client.key", CAFile:"/Users/jenkins/minikube-integration/17116-1006/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10597dd20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 12:16:34.418844    2213 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 12:16:34.422591    2213 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 12:16:34.422597    2213 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0906 12:16:34.422605    2213 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/ingress-addon-legacy-192000/id_rsa Username:docker}
	I0906 12:16:34.426868    2213 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-192000"
	I0906 12:16:34.426888    2213 host.go:66] Checking if "ingress-addon-legacy-192000" exists ...
	I0906 12:16:34.427576    2213 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 12:16:34.427582    2213 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 12:16:34.427589    2213 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/ingress-addon-legacy-192000/id_rsa Username:docker}
	I0906 12:16:34.429617    2213 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-192000" context rescaled to 1 replicas
	I0906 12:16:34.429633    2213 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.6 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:16:34.432740    2213 out.go:177] * Verifying Kubernetes components...
	I0906 12:16:34.440732    2213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 12:16:34.463486    2213 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0906 12:16:34.463842    2213 kapi.go:59] client config for ingress-addon-legacy-192000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/ingress-addon-legacy-192000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/ingress-addon-legacy-192000/client.key", CAFile:"/Users/jenkins/minikube-integration/17116-1006/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10597dd20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 12:16:34.463986    2213 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-192000" to be "Ready" ...
	I0906 12:16:34.465275    2213 node_ready.go:49] node "ingress-addon-legacy-192000" has status "Ready":"True"
	I0906 12:16:34.465281    2213 node_ready.go:38] duration metric: took 1.284625ms waiting for node "ingress-addon-legacy-192000" to be "Ready" ...
	I0906 12:16:34.465292    2213 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 12:16:34.468900    2213 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-192000" in "kube-system" namespace to be "Ready" ...
	I0906 12:16:34.470876    2213 pod_ready.go:92] pod "etcd-ingress-addon-legacy-192000" in "kube-system" namespace has status "Ready":"True"
	I0906 12:16:34.470883    2213 pod_ready.go:81] duration metric: took 1.973584ms waiting for pod "etcd-ingress-addon-legacy-192000" in "kube-system" namespace to be "Ready" ...
	I0906 12:16:34.470886    2213 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-192000" in "kube-system" namespace to be "Ready" ...
	I0906 12:16:34.472616    2213 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-192000" in "kube-system" namespace has status "Ready":"True"
	I0906 12:16:34.472621    2213 pod_ready.go:81] duration metric: took 1.732083ms waiting for pod "kube-apiserver-ingress-addon-legacy-192000" in "kube-system" namespace to be "Ready" ...
	I0906 12:16:34.472624    2213 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-192000" in "kube-system" namespace to be "Ready" ...
	I0906 12:16:34.474419    2213 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-192000" in "kube-system" namespace has status "Ready":"True"
	I0906 12:16:34.474427    2213 pod_ready.go:81] duration metric: took 1.799833ms waiting for pod "kube-controller-manager-ingress-addon-legacy-192000" in "kube-system" namespace to be "Ready" ...
	I0906 12:16:34.474431    2213 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-192000" in "kube-system" namespace to be "Ready" ...
	I0906 12:16:34.475099    2213 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 12:16:34.476322    2213 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-192000" in "kube-system" namespace has status "Ready":"True"
	I0906 12:16:34.476326    2213 pod_ready.go:81] duration metric: took 1.893041ms waiting for pod "kube-scheduler-ingress-addon-legacy-192000" in "kube-system" namespace to be "Ready" ...
	I0906 12:16:34.476330    2213 pod_ready.go:38] duration metric: took 11.031042ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 12:16:34.476339    2213 api_server.go:52] waiting for apiserver process to appear ...
	I0906 12:16:34.476367    2213 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 12:16:34.480357    2213 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 12:16:34.768460    2213 start.go:907] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0906 12:16:34.797234    2213 api_server.go:72] duration metric: took 367.590167ms to wait for apiserver process to appear ...
	I0906 12:16:34.797246    2213 api_server.go:88] waiting for apiserver healthz status ...
	I0906 12:16:34.797255    2213 api_server.go:253] Checking apiserver healthz at https://192.168.105.6:8443/healthz ...
	I0906 12:16:34.801424    2213 api_server.go:279] https://192.168.105.6:8443/healthz returned 200:
	ok
	I0906 12:16:34.801848    2213 api_server.go:141] control plane version: v1.18.20
	I0906 12:16:34.801854    2213 api_server.go:131] duration metric: took 4.605375ms to wait for apiserver health ...
	I0906 12:16:34.801858    2213 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 12:16:34.805562    2213 system_pods.go:59] 7 kube-system pods found
	I0906 12:16:34.805574    2213 system_pods.go:61] "coredns-66bff467f8-ftsnw" [0f1f4d0b-8bef-4afb-908a-3d8a7cf82865] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 12:16:34.805578    2213 system_pods.go:61] "etcd-ingress-addon-legacy-192000" [321c3a3b-9b5e-42e7-98d3-833270cd23de] Running
	I0906 12:16:34.805581    2213 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-192000" [2f8fb43f-becb-475c-864c-e3addb417059] Running
	I0906 12:16:34.805583    2213 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-192000" [52fc049e-605a-4154-905f-89da49dfbf2d] Running
	I0906 12:16:34.805587    2213 system_pods.go:61] "kube-proxy-jzgwx" [d1dfa5a0-e186-4e14-b1b1-e0957d341197] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0906 12:16:34.805590    2213 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-192000" [178d5f86-1955-4587-8720-b2169a4a7197] Running
	I0906 12:16:34.805592    2213 system_pods.go:61] "storage-provisioner" [e91c1d2e-7966-48bb-9f08-7d70a1ac0411] Pending
	I0906 12:16:34.805596    2213 system_pods.go:74] duration metric: took 3.734375ms to wait for pod list to return data ...
	I0906 12:16:34.805599    2213 default_sa.go:34] waiting for default service account to be created ...
	I0906 12:16:34.809129    2213 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0906 12:16:34.817064    2213 addons.go:502] enable addons completed in 403.353125ms: enabled=[storage-provisioner default-storageclass]
	I0906 12:16:34.866088    2213 request.go:629] Waited for 60.442875ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/default/serviceaccounts
	I0906 12:16:34.869172    2213 default_sa.go:45] found service account: "default"
	I0906 12:16:34.869182    2213 default_sa.go:55] duration metric: took 63.579875ms for default service account to be created ...
	I0906 12:16:34.869186    2213 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 12:16:35.066071    2213 request.go:629] Waited for 196.845084ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods
	I0906 12:16:35.071685    2213 system_pods.go:86] 7 kube-system pods found
	I0906 12:16:35.071699    2213 system_pods.go:89] "coredns-66bff467f8-ftsnw" [0f1f4d0b-8bef-4afb-908a-3d8a7cf82865] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 12:16:35.071703    2213 system_pods.go:89] "etcd-ingress-addon-legacy-192000" [321c3a3b-9b5e-42e7-98d3-833270cd23de] Running
	I0906 12:16:35.071706    2213 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-192000" [2f8fb43f-becb-475c-864c-e3addb417059] Running
	I0906 12:16:35.071708    2213 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-192000" [52fc049e-605a-4154-905f-89da49dfbf2d] Running
	I0906 12:16:35.071712    2213 system_pods.go:89] "kube-proxy-jzgwx" [d1dfa5a0-e186-4e14-b1b1-e0957d341197] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0906 12:16:35.071714    2213 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-192000" [178d5f86-1955-4587-8720-b2169a4a7197] Running
	I0906 12:16:35.071718    2213 system_pods.go:89] "storage-provisioner" [e91c1d2e-7966-48bb-9f08-7d70a1ac0411] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 12:16:35.071736    2213 retry.go:31] will retry after 302.449531ms: missing components: kube-dns, kube-proxy
	I0906 12:16:35.379985    2213 system_pods.go:86] 7 kube-system pods found
	I0906 12:16:35.380000    2213 system_pods.go:89] "coredns-66bff467f8-ftsnw" [0f1f4d0b-8bef-4afb-908a-3d8a7cf82865] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 12:16:35.380006    2213 system_pods.go:89] "etcd-ingress-addon-legacy-192000" [321c3a3b-9b5e-42e7-98d3-833270cd23de] Running
	I0906 12:16:35.380010    2213 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-192000" [2f8fb43f-becb-475c-864c-e3addb417059] Running
	I0906 12:16:35.380014    2213 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-192000" [52fc049e-605a-4154-905f-89da49dfbf2d] Running
	I0906 12:16:35.380020    2213 system_pods.go:89] "kube-proxy-jzgwx" [d1dfa5a0-e186-4e14-b1b1-e0957d341197] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0906 12:16:35.380023    2213 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-192000" [178d5f86-1955-4587-8720-b2169a4a7197] Running
	I0906 12:16:35.380030    2213 system_pods.go:89] "storage-provisioner" [e91c1d2e-7966-48bb-9f08-7d70a1ac0411] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 12:16:35.380041    2213 retry.go:31] will retry after 358.24866ms: missing components: kube-proxy
	I0906 12:16:35.747699    2213 system_pods.go:86] 7 kube-system pods found
	I0906 12:16:35.747731    2213 system_pods.go:89] "coredns-66bff467f8-ftsnw" [0f1f4d0b-8bef-4afb-908a-3d8a7cf82865] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 12:16:35.747742    2213 system_pods.go:89] "etcd-ingress-addon-legacy-192000" [321c3a3b-9b5e-42e7-98d3-833270cd23de] Running
	I0906 12:16:35.747757    2213 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-192000" [2f8fb43f-becb-475c-864c-e3addb417059] Running
	I0906 12:16:35.747766    2213 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-192000" [52fc049e-605a-4154-905f-89da49dfbf2d] Running
	I0906 12:16:35.747772    2213 system_pods.go:89] "kube-proxy-jzgwx" [d1dfa5a0-e186-4e14-b1b1-e0957d341197] Running
	I0906 12:16:35.747780    2213 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-192000" [178d5f86-1955-4587-8720-b2169a4a7197] Running
	I0906 12:16:35.747790    2213 system_pods.go:89] "storage-provisioner" [e91c1d2e-7966-48bb-9f08-7d70a1ac0411] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 12:16:35.747801    2213 system_pods.go:126] duration metric: took 878.61675ms to wait for k8s-apps to be running ...
	I0906 12:16:35.747809    2213 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 12:16:35.747983    2213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 12:16:35.760656    2213 system_svc.go:56] duration metric: took 12.8425ms WaitForService to wait for kubelet.
	I0906 12:16:35.760681    2213 kubeadm.go:581] duration metric: took 1.33104425s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0906 12:16:35.760702    2213 node_conditions.go:102] verifying NodePressure condition ...
	I0906 12:16:35.763384    2213 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0906 12:16:35.763400    2213 node_conditions.go:123] node cpu capacity is 2
	I0906 12:16:35.763410    2213 node_conditions.go:105] duration metric: took 2.702875ms to run NodePressure ...
	I0906 12:16:35.763420    2213 start.go:228] waiting for startup goroutines ...
	I0906 12:16:35.763426    2213 start.go:233] waiting for cluster config update ...
	I0906 12:16:35.763441    2213 start.go:242] writing updated cluster config ...
	I0906 12:16:35.763963    2213 ssh_runner.go:195] Run: rm -f paused
	I0906 12:16:35.805822    2213 start.go:600] kubectl: 1.27.2, cluster: 1.18.20 (minor skew: 9)
	I0906 12:16:35.809923    2213 out.go:177] 
	W0906 12:16:35.813957    2213 out.go:239] ! /usr/local/bin/kubectl is version 1.27.2, which may have incompatibilities with Kubernetes 1.18.20.
	I0906 12:16:35.817908    2213 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0906 12:16:35.824827    2213 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-192000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Wed 2023-09-06 19:15:51 UTC, ends at Wed 2023-09-06 19:17:49 UTC. --
	Sep 06 19:17:19 ingress-addon-legacy-192000 dockerd[1088]: time="2023-09-06T19:17:19.010818736Z" level=warning msg="cleaning up after shim disconnected" id=bc59bf309ade51297a37c98a270c72cc5aa02b6a69d0aaf3cac85d85b83892d0 namespace=moby
	Sep 06 19:17:19 ingress-addon-legacy-192000 dockerd[1088]: time="2023-09-06T19:17:19.010823112Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 06 19:17:19 ingress-addon-legacy-192000 dockerd[1081]: time="2023-09-06T19:17:19.010938674Z" level=info msg="ignoring event" container=bc59bf309ade51297a37c98a270c72cc5aa02b6a69d0aaf3cac85d85b83892d0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 19:17:32 ingress-addon-legacy-192000 dockerd[1081]: time="2023-09-06T19:17:32.240166580Z" level=info msg="ignoring event" container=12b31b27a8f57d6df88279152cba12e1bae5b81c7c05342480d2f06fe6980ae6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 19:17:32 ingress-addon-legacy-192000 dockerd[1088]: time="2023-09-06T19:17:32.240615670Z" level=info msg="shim disconnected" id=12b31b27a8f57d6df88279152cba12e1bae5b81c7c05342480d2f06fe6980ae6 namespace=moby
	Sep 06 19:17:32 ingress-addon-legacy-192000 dockerd[1088]: time="2023-09-06T19:17:32.240804483Z" level=warning msg="cleaning up after shim disconnected" id=12b31b27a8f57d6df88279152cba12e1bae5b81c7c05342480d2f06fe6980ae6 namespace=moby
	Sep 06 19:17:32 ingress-addon-legacy-192000 dockerd[1088]: time="2023-09-06T19:17:32.240824360Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 06 19:17:36 ingress-addon-legacy-192000 dockerd[1088]: time="2023-09-06T19:17:36.258778434Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 06 19:17:36 ingress-addon-legacy-192000 dockerd[1088]: time="2023-09-06T19:17:36.259229270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:17:36 ingress-addon-legacy-192000 dockerd[1088]: time="2023-09-06T19:17:36.259256898Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 06 19:17:36 ingress-addon-legacy-192000 dockerd[1088]: time="2023-09-06T19:17:36.259338281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:17:36 ingress-addon-legacy-192000 dockerd[1088]: time="2023-09-06T19:17:36.301456147Z" level=info msg="shim disconnected" id=461667aeb808cbee21e662eb2b2bfa156d56db0c8f13c54428ab895e8fd4aac8 namespace=moby
	Sep 06 19:17:36 ingress-addon-legacy-192000 dockerd[1088]: time="2023-09-06T19:17:36.301484941Z" level=warning msg="cleaning up after shim disconnected" id=461667aeb808cbee21e662eb2b2bfa156d56db0c8f13c54428ab895e8fd4aac8 namespace=moby
	Sep 06 19:17:36 ingress-addon-legacy-192000 dockerd[1088]: time="2023-09-06T19:17:36.301489859Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 06 19:17:36 ingress-addon-legacy-192000 dockerd[1081]: time="2023-09-06T19:17:36.301597994Z" level=info msg="ignoring event" container=461667aeb808cbee21e662eb2b2bfa156d56db0c8f13c54428ab895e8fd4aac8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 19:17:44 ingress-addon-legacy-192000 dockerd[1081]: time="2023-09-06T19:17:44.723970471Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=20d4f68c1a9a24a945262db7caa744e14056eb3e912b31c289e6a6bfea4f520c
	Sep 06 19:17:44 ingress-addon-legacy-192000 dockerd[1081]: time="2023-09-06T19:17:44.731199019Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=20d4f68c1a9a24a945262db7caa744e14056eb3e912b31c289e6a6bfea4f520c
	Sep 06 19:17:44 ingress-addon-legacy-192000 dockerd[1088]: time="2023-09-06T19:17:44.799122066Z" level=info msg="shim disconnected" id=20d4f68c1a9a24a945262db7caa744e14056eb3e912b31c289e6a6bfea4f520c namespace=moby
	Sep 06 19:17:44 ingress-addon-legacy-192000 dockerd[1081]: time="2023-09-06T19:17:44.799552310Z" level=info msg="ignoring event" container=20d4f68c1a9a24a945262db7caa744e14056eb3e912b31c289e6a6bfea4f520c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 19:17:44 ingress-addon-legacy-192000 dockerd[1088]: time="2023-09-06T19:17:44.800033307Z" level=warning msg="cleaning up after shim disconnected" id=20d4f68c1a9a24a945262db7caa744e14056eb3e912b31c289e6a6bfea4f520c namespace=moby
	Sep 06 19:17:44 ingress-addon-legacy-192000 dockerd[1088]: time="2023-09-06T19:17:44.800080686Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 06 19:17:44 ingress-addon-legacy-192000 dockerd[1081]: time="2023-09-06T19:17:44.833691348Z" level=info msg="ignoring event" container=88a76153d835165411e162d608098ef8ea89bf6db355b0ed6cf9bfbb376acf73 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 19:17:44 ingress-addon-legacy-192000 dockerd[1088]: time="2023-09-06T19:17:44.833825734Z" level=info msg="shim disconnected" id=88a76153d835165411e162d608098ef8ea89bf6db355b0ed6cf9bfbb376acf73 namespace=moby
	Sep 06 19:17:44 ingress-addon-legacy-192000 dockerd[1088]: time="2023-09-06T19:17:44.833856444Z" level=warning msg="cleaning up after shim disconnected" id=88a76153d835165411e162d608098ef8ea89bf6db355b0ed6cf9bfbb376acf73 namespace=moby
	Sep 06 19:17:44 ingress-addon-legacy-192000 dockerd[1088]: time="2023-09-06T19:17:44.833861111Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID
	461667aeb808c       a39a074194753                                                                                                      13 seconds ago       Exited              hello-world-app           2                   f19609a944b2c
	28964df69996d       nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70                                      40 seconds ago       Running             nginx                     0                   5b256e81fc7fe
	20d4f68c1a9a2       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   59 seconds ago       Exited              controller                0                   88a76153d8351
	dda67527c6f51       a883f7fc35610                                                                                                      About a minute ago   Exited              patch                     1                   2a1f14fed7802
	8a896d3d1873d       jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7               About a minute ago   Exited              create                    0                   58b5b41189154
	48368df5af8fe       gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944    About a minute ago   Running             storage-provisioner       0                   78b8c21262cf0
	e127f2c0a901f       6e17ba78cf3eb                                                                                                      About a minute ago   Running             coredns                   0                   d90ff08f3359e
	21373d8a65695       565297bc6f7d4                                                                                                      About a minute ago   Running             kube-proxy                0                   1b77fc7066415
	8f68cfbaff7b8       68a4fac29a865                                                                                                      About a minute ago   Running             kube-controller-manager   0                   1c9ba3d362e57
	4a710919215ee       095f37015706d                                                                                                      About a minute ago   Running             kube-scheduler            0                   8e12ef9baa116
	ce4059681d621       2694cf044d665                                                                                                      About a minute ago   Running             kube-apiserver            0                   bea54146aa611
	b03ec40243747       ab707b0a0ea33                                                                                                      About a minute ago   Running             etcd                      0                   9e66af5a13f43
	
	* 
	* ==> coredns [e127f2c0a901] <==
	* [INFO] 172.17.0.1:33481 - 8618 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000039548s
	[INFO] 172.17.0.1:33481 - 2805 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00003534s
	[INFO] 172.17.0.1:33481 - 63947 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000039465s
	[INFO] 172.17.0.1:33481 - 30236 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000053176s
	[INFO] 172.17.0.1:62332 - 28811 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000021753s
	[INFO] 172.17.0.1:62332 - 16132 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000009918s
	[INFO] 172.17.0.1:62332 - 11257 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00000871s
	[INFO] 172.17.0.1:62332 - 64275 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000008668s
	[INFO] 172.17.0.1:62332 - 52087 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000008293s
	[INFO] 172.17.0.1:62332 - 36854 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000009793s
	[INFO] 172.17.0.1:62332 - 26754 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000010502s
	[INFO] 172.17.0.1:43203 - 9457 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000036048s
	[INFO] 172.17.0.1:49751 - 15485 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000010668s
	[INFO] 172.17.0.1:43203 - 3529 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000009668s
	[INFO] 172.17.0.1:43203 - 37560 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000008627s
	[INFO] 172.17.0.1:43203 - 51399 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000009751s
	[INFO] 172.17.0.1:43203 - 9296 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00003984s
	[INFO] 172.17.0.1:43203 - 22505 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000010502s
	[INFO] 172.17.0.1:43203 - 35539 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000030797s
	[INFO] 172.17.0.1:49751 - 228 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000013627s
	[INFO] 172.17.0.1:49751 - 20237 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000011877s
	[INFO] 172.17.0.1:49751 - 56584 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000011752s
	[INFO] 172.17.0.1:49751 - 41831 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000033173s
	[INFO] 172.17.0.1:49751 - 57140 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000010377s
	[INFO] 172.17.0.1:49751 - 42151 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000012919s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-192000
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-192000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5d7e0b4e9bf2e12b4952bdb0ed6ce3c8b866f138
	                    minikube.k8s.io/name=ingress-addon-legacy-192000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_06T12_16_18_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 06 Sep 2023 19:16:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-192000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 06 Sep 2023 19:17:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 06 Sep 2023 19:17:24 +0000   Wed, 06 Sep 2023 19:16:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 06 Sep 2023 19:17:24 +0000   Wed, 06 Sep 2023 19:16:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 06 Sep 2023 19:17:24 +0000   Wed, 06 Sep 2023 19:16:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 06 Sep 2023 19:17:24 +0000   Wed, 06 Sep 2023 19:16:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.6
	  Hostname:    ingress-addon-legacy-192000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             4003124Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             4003124Ki
	  pods:               110
	System Info:
	  Machine ID:                 02734b47212b42e78b2735891747a616
	  System UUID:                02734b47212b42e78b2735891747a616
	  Boot ID:                    0acf2287-c1d7-43c2-9abd-ba775a8c9369
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.5
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-wggrs                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 coredns-66bff467f8-ftsnw                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     75s
	  kube-system                 etcd-ingress-addon-legacy-192000                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-apiserver-ingress-addon-legacy-192000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-192000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-proxy-jzgwx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-scheduler-ingress-addon-legacy-192000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (1%)   170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  NodeHasSufficientMemory  97s (x5 over 97s)  kubelet     Node ingress-addon-legacy-192000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    97s (x4 over 97s)  kubelet     Node ingress-addon-legacy-192000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     97s (x4 over 97s)  kubelet     Node ingress-addon-legacy-192000 status is now: NodeHasSufficientPID
	  Normal  Starting                 85s                kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  85s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  85s                kubelet     Node ingress-addon-legacy-192000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    85s                kubelet     Node ingress-addon-legacy-192000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     85s                kubelet     Node ingress-addon-legacy-192000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                85s                kubelet     Node ingress-addon-legacy-192000 status is now: NodeReady
	  Normal  Starting                 75s                kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Sep 6 19:15] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.664761] EINJ: EINJ table not found.
	[  +0.528440] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.044323] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000795] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +4.097799] systemd-fstab-generator[480]: Ignoring "noauto" for root device
	[  +0.061570] systemd-fstab-generator[491]: Ignoring "noauto" for root device
	[  +0.444239] systemd-fstab-generator[791]: Ignoring "noauto" for root device
	[  +0.187400] systemd-fstab-generator[826]: Ignoring "noauto" for root device
	[  +0.090740] systemd-fstab-generator[837]: Ignoring "noauto" for root device
	[  +0.064293] systemd-fstab-generator[850]: Ignoring "noauto" for root device
	[  +4.266820] systemd-fstab-generator[1055]: Ignoring "noauto" for root device
	[Sep 6 19:16] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.816797] systemd-fstab-generator[1526]: Ignoring "noauto" for root device
	[  +7.525050] kauditd_printk_skb: 29 callbacks suppressed
	[  +0.076090] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[  +5.472851] systemd-fstab-generator[2602]: Ignoring "noauto" for root device
	[ +17.121743] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.473192] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.986480] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[Sep 6 19:17] kauditd_printk_skb: 1 callbacks suppressed
	
	* 
	* ==> etcd [b03ec4024374] <==
	* raft2023/09/06 19:16:12 INFO: ed054832bd1917e1 became follower at term 0
	raft2023/09/06 19:16:12 INFO: newRaft ed054832bd1917e1 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/09/06 19:16:12 INFO: ed054832bd1917e1 became follower at term 1
	raft2023/09/06 19:16:12 INFO: ed054832bd1917e1 switched to configuration voters=(17079136544630577121)
	2023-09-06 19:16:12.840353 W | auth: simple token is not cryptographically signed
	2023-09-06 19:16:12.841098 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-09-06 19:16:12.842754 I | etcdserver: ed054832bd1917e1 as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/09/06 19:16:12 INFO: ed054832bd1917e1 switched to configuration voters=(17079136544630577121)
	2023-09-06 19:16:12.843106 I | etcdserver/membership: added member ed054832bd1917e1 [https://192.168.105.6:2380] to cluster 45a39c2c59b0edf4
	2023-09-06 19:16:12.843417 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-09-06 19:16:12.843513 I | embed: listening for peers on 192.168.105.6:2380
	2023-09-06 19:16:12.843533 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2023/09/06 19:16:12 INFO: ed054832bd1917e1 is starting a new election at term 1
	raft2023/09/06 19:16:12 INFO: ed054832bd1917e1 became candidate at term 2
	raft2023/09/06 19:16:12 INFO: ed054832bd1917e1 received MsgVoteResp from ed054832bd1917e1 at term 2
	raft2023/09/06 19:16:12 INFO: ed054832bd1917e1 became leader at term 2
	raft2023/09/06 19:16:12 INFO: raft.node: ed054832bd1917e1 elected leader ed054832bd1917e1 at term 2
	2023-09-06 19:16:12.953738 I | etcdserver: published {Name:ingress-addon-legacy-192000 ClientURLs:[https://192.168.105.6:2379]} to cluster 45a39c2c59b0edf4
	2023-09-06 19:16:12.959725 I | embed: ready to serve client requests
	2023-09-06 19:16:12.969690 I | embed: ready to serve client requests
	2023-09-06 19:16:13.094733 I | etcdserver: setting up the initial cluster version to 3.4
	2023-09-06 19:16:13.263394 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-09-06 19:16:13.274679 I | embed: serving client requests on 127.0.0.1:2379
	2023-09-06 19:16:13.282695 I | embed: serving client requests on 192.168.105.6:2379
	2023-09-06 19:16:13.314670 I | etcdserver/api: enabled capabilities for version 3.4
	
	* 
	* ==> kernel <==
	*  19:17:49 up 2 min,  0 users,  load average: 0.66, 0.29, 0.11
	Linux ingress-addon-legacy-192000 5.10.57 #1 SMP PREEMPT Thu Aug 24 12:01:08 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [ce4059681d62] <==
	* I0906 19:16:15.016703       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	E0906 19:16:15.018062       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.105.6, ResourceVersion: 0, AdditionalErrorMsg: 
	I0906 19:16:15.093445       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0906 19:16:15.095739       1 cache.go:39] Caches are synced for autoregister controller
	I0906 19:16:15.096030       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0906 19:16:15.096052       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0906 19:16:15.117458       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0906 19:16:15.991988       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0906 19:16:15.992071       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0906 19:16:16.006523       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0906 19:16:16.020681       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0906 19:16:16.020720       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0906 19:16:16.155683       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0906 19:16:16.167072       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0906 19:16:16.291897       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.105.6]
	I0906 19:16:16.292504       1 controller.go:609] quota admission added evaluator for: endpoints
	I0906 19:16:16.294385       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0906 19:16:17.304248       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0906 19:16:17.752644       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0906 19:16:17.931362       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0906 19:16:24.140974       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0906 19:16:34.401724       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0906 19:16:34.485610       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0906 19:16:36.006510       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0906 19:17:06.162378       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	* 
	* ==> kube-controller-manager [8f68cfbaff7b] <==
	* I0906 19:16:34.460152       1 shared_informer.go:230] Caches are synced for stateful set 
	I0906 19:16:34.484221       1 shared_informer.go:230] Caches are synced for deployment 
	I0906 19:16:34.487825       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"03a90d2b-c741-45ae-bf19-8acd06e715a3", APIVersion:"apps/v1", ResourceVersion:"321", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 1
	I0906 19:16:34.506799       1 shared_informer.go:230] Caches are synced for disruption 
	I0906 19:16:34.506816       1 disruption.go:339] Sending events to api server.
	I0906 19:16:34.549342       1 shared_informer.go:230] Caches are synced for HPA 
	I0906 19:16:34.549534       1 shared_informer.go:230] Caches are synced for resource quota 
	I0906 19:16:34.550465       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"fa363d03-0ce3-4d75-aaec-134bbd28465c", APIVersion:"apps/v1", ResourceVersion:"333", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-ftsnw
	I0906 19:16:34.644890       1 shared_informer.go:230] Caches are synced for resource quota 
	I0906 19:16:34.645270       1 shared_informer.go:230] Caches are synced for expand 
	I0906 19:16:34.654737       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0906 19:16:34.654755       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0906 19:16:34.654892       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0906 19:16:34.663578       1 shared_informer.go:230] Caches are synced for attach detach 
	I0906 19:16:34.674567       1 shared_informer.go:230] Caches are synced for PV protection 
	I0906 19:16:34.742587       1 shared_informer.go:230] Caches are synced for persistent volume 
	I0906 19:16:36.009580       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"f030cc8d-3b09-40da-a152-ad01d65d420c", APIVersion:"apps/v1", ResourceVersion:"404", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0906 19:16:36.012592       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"829ca493-4e49-45b7-8e5e-1812b0047d32", APIVersion:"batch/v1", ResourceVersion:"407", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-swlh2
	I0906 19:16:36.013166       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"ab378fd3-069c-41d4-a679-a6283fcdceec", APIVersion:"apps/v1", ResourceVersion:"406", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-dw6jn
	I0906 19:16:36.041534       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"4cde257f-8365-46b7-879c-4fadd4787198", APIVersion:"batch/v1", ResourceVersion:"418", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-g7sjb
	I0906 19:16:39.312240       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"829ca493-4e49-45b7-8e5e-1812b0047d32", APIVersion:"batch/v1", ResourceVersion:"419", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0906 19:16:40.341920       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"4cde257f-8365-46b7-879c-4fadd4787198", APIVersion:"batch/v1", ResourceVersion:"428", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0906 19:17:16.407055       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"b5eefeea-e74d-4a2a-b9a7-dd3395938202", APIVersion:"apps/v1", ResourceVersion:"564", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0906 19:17:16.412773       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"578656fc-80af-4694-b0ec-8630221a5630", APIVersion:"apps/v1", ResourceVersion:"565", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-wggrs
	E0906 19:17:47.446469       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-mzfr4" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	* 
	* ==> kube-proxy [21373d8a6569] <==
	* W0906 19:16:34.887611       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0906 19:16:34.892728       1 node.go:136] Successfully retrieved node IP: 192.168.105.6
	I0906 19:16:34.892769       1 server_others.go:186] Using iptables Proxier.
	I0906 19:16:34.892916       1 server.go:583] Version: v1.18.20
	I0906 19:16:34.893407       1 config.go:315] Starting service config controller
	I0906 19:16:34.893438       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0906 19:16:34.893514       1 config.go:133] Starting endpoints config controller
	I0906 19:16:34.893532       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0906 19:16:34.993608       1 shared_informer.go:230] Caches are synced for service config 
	I0906 19:16:34.993623       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [4a710919215e] <==
	* W0906 19:16:15.034213       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0906 19:16:15.034239       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0906 19:16:15.046998       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0906 19:16:15.047153       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0906 19:16:15.048931       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0906 19:16:15.049404       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0906 19:16:15.049441       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0906 19:16:15.049465       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0906 19:16:15.054936       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0906 19:16:15.055131       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0906 19:16:15.055182       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0906 19:16:15.055212       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0906 19:16:15.055227       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0906 19:16:15.063096       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0906 19:16:15.063153       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0906 19:16:15.063205       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0906 19:16:15.063253       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0906 19:16:15.063311       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0906 19:16:15.065722       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0906 19:16:15.065863       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0906 19:16:15.893804       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0906 19:16:16.004623       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0906 19:16:16.006938       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0906 19:16:16.056822       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0906 19:16:18.250629       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-09-06 19:15:51 UTC, ends at Wed 2023-09-06 19:17:49 UTC. --
	Sep 06 19:17:30 ingress-addon-legacy-192000 kubelet[2608]: E0906 19:17:30.186396    2608 pod_workers.go:191] Error syncing pod b964dbe4-244c-4d59-af3f-3278e6885696 ("kube-ingress-dns-minikube_kube-system(b964dbe4-244c-4d59-af3f-3278e6885696)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with CrashLoopBackOff: "back-off 20s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(b964dbe4-244c-4d59-af3f-3278e6885696)"
	Sep 06 19:17:31 ingress-addon-legacy-192000 kubelet[2608]: I0906 19:17:31.821696    2608 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-ncqkk" (UniqueName: "kubernetes.io/secret/b964dbe4-244c-4d59-af3f-3278e6885696-minikube-ingress-dns-token-ncqkk") pod "b964dbe4-244c-4d59-af3f-3278e6885696" (UID: "b964dbe4-244c-4d59-af3f-3278e6885696")
	Sep 06 19:17:31 ingress-addon-legacy-192000 kubelet[2608]: I0906 19:17:31.823266    2608 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b964dbe4-244c-4d59-af3f-3278e6885696-minikube-ingress-dns-token-ncqkk" (OuterVolumeSpecName: "minikube-ingress-dns-token-ncqkk") pod "b964dbe4-244c-4d59-af3f-3278e6885696" (UID: "b964dbe4-244c-4d59-af3f-3278e6885696"). InnerVolumeSpecName "minikube-ingress-dns-token-ncqkk". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 06 19:17:31 ingress-addon-legacy-192000 kubelet[2608]: I0906 19:17:31.921904    2608 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-ncqkk" (UniqueName: "kubernetes.io/secret/b964dbe4-244c-4d59-af3f-3278e6885696-minikube-ingress-dns-token-ncqkk") on node "ingress-addon-legacy-192000" DevicePath ""
	Sep 06 19:17:33 ingress-addon-legacy-192000 kubelet[2608]: I0906 19:17:33.182394    2608 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 719949da7f3aa3ff1341bddbe5a2cf0296a178efb0c9ff410bae9f5806150f6d
	Sep 06 19:17:36 ingress-addon-legacy-192000 kubelet[2608]: I0906 19:17:36.181940    2608 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: bc59bf309ade51297a37c98a270c72cc5aa02b6a69d0aaf3cac85d85b83892d0
	Sep 06 19:17:36 ingress-addon-legacy-192000 kubelet[2608]: W0906 19:17:36.238158    2608 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-wggrs through plugin: invalid network status for
	Sep 06 19:17:36 ingress-addon-legacy-192000 kubelet[2608]: W0906 19:17:36.313912    2608 container.go:412] Failed to create summary reader for "/kubepods/besteffort/pod01a7898f-c847-4368-9dca-bba2dbd126b6/461667aeb808cbee21e662eb2b2bfa156d56db0c8f13c54428ab895e8fd4aac8": none of the resources are being tracked.
	Sep 06 19:17:37 ingress-addon-legacy-192000 kubelet[2608]: W0906 19:17:37.318936    2608 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-wggrs through plugin: invalid network status for
	Sep 06 19:17:37 ingress-addon-legacy-192000 kubelet[2608]: I0906 19:17:37.327005    2608 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: bc59bf309ade51297a37c98a270c72cc5aa02b6a69d0aaf3cac85d85b83892d0
	Sep 06 19:17:37 ingress-addon-legacy-192000 kubelet[2608]: I0906 19:17:37.328874    2608 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 461667aeb808cbee21e662eb2b2bfa156d56db0c8f13c54428ab895e8fd4aac8
	Sep 06 19:17:37 ingress-addon-legacy-192000 kubelet[2608]: E0906 19:17:37.329249    2608 pod_workers.go:191] Error syncing pod 01a7898f-c847-4368-9dca-bba2dbd126b6 ("hello-world-app-5f5d8b66bb-wggrs_default(01a7898f-c847-4368-9dca-bba2dbd126b6)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-wggrs_default(01a7898f-c847-4368-9dca-bba2dbd126b6)"
	Sep 06 19:17:38 ingress-addon-legacy-192000 kubelet[2608]: W0906 19:17:38.357218    2608 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-wggrs through plugin: invalid network status for
	Sep 06 19:17:42 ingress-addon-legacy-192000 kubelet[2608]: E0906 19:17:42.707771    2608 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-dw6jn.1782655da4356487", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-dw6jn", UID:"d0b15976-df66-4275-abab-4ce31c82fc0c", APIVersion:"v1", ResourceVersion:"413", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-192000"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc13651f5aa158887, ext:84983697642, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc13651f5aa158887, ext:84983697642, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-dw6jn.1782655da4356487" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Sep 06 19:17:42 ingress-addon-legacy-192000 kubelet[2608]: E0906 19:17:42.724463    2608 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-dw6jn.1782655da4356487", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-dw6jn", UID:"d0b15976-df66-4275-abab-4ce31c82fc0c", APIVersion:"v1", ResourceVersion:"413", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-192000"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc13651f5aa158887, ext:84983697642, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc13651f5ab08b72a, ext:84999634829, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-dw6jn.1782655da4356487" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Sep 06 19:17:45 ingress-addon-legacy-192000 kubelet[2608]: W0906 19:17:45.458783    2608 pod_container_deletor.go:77] Container "88a76153d835165411e162d608098ef8ea89bf6db355b0ed6cf9bfbb376acf73" not found in pod's containers
	Sep 06 19:17:46 ingress-addon-legacy-192000 kubelet[2608]: I0906 19:17:46.917559    2608 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-mkqbv" (UniqueName: "kubernetes.io/secret/d0b15976-df66-4275-abab-4ce31c82fc0c-ingress-nginx-token-mkqbv") pod "d0b15976-df66-4275-abab-4ce31c82fc0c" (UID: "d0b15976-df66-4275-abab-4ce31c82fc0c")
	Sep 06 19:17:46 ingress-addon-legacy-192000 kubelet[2608]: I0906 19:17:46.918812    2608 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/d0b15976-df66-4275-abab-4ce31c82fc0c-webhook-cert") pod "d0b15976-df66-4275-abab-4ce31c82fc0c" (UID: "d0b15976-df66-4275-abab-4ce31c82fc0c")
	Sep 06 19:17:46 ingress-addon-legacy-192000 kubelet[2608]: I0906 19:17:46.929381    2608 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0b15976-df66-4275-abab-4ce31c82fc0c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "d0b15976-df66-4275-abab-4ce31c82fc0c" (UID: "d0b15976-df66-4275-abab-4ce31c82fc0c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 06 19:17:46 ingress-addon-legacy-192000 kubelet[2608]: I0906 19:17:46.929769    2608 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0b15976-df66-4275-abab-4ce31c82fc0c-ingress-nginx-token-mkqbv" (OuterVolumeSpecName: "ingress-nginx-token-mkqbv") pod "d0b15976-df66-4275-abab-4ce31c82fc0c" (UID: "d0b15976-df66-4275-abab-4ce31c82fc0c"). InnerVolumeSpecName "ingress-nginx-token-mkqbv". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 06 19:17:47 ingress-addon-legacy-192000 kubelet[2608]: I0906 19:17:47.019521    2608 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/d0b15976-df66-4275-abab-4ce31c82fc0c-webhook-cert") on node "ingress-addon-legacy-192000" DevicePath ""
	Sep 06 19:17:47 ingress-addon-legacy-192000 kubelet[2608]: I0906 19:17:47.019626    2608 reconciler.go:319] Volume detached for volume "ingress-nginx-token-mkqbv" (UniqueName: "kubernetes.io/secret/d0b15976-df66-4275-abab-4ce31c82fc0c-ingress-nginx-token-mkqbv") on node "ingress-addon-legacy-192000" DevicePath ""
	Sep 06 19:17:48 ingress-addon-legacy-192000 kubelet[2608]: I0906 19:17:48.185302    2608 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 461667aeb808cbee21e662eb2b2bfa156d56db0c8f13c54428ab895e8fd4aac8
	Sep 06 19:17:48 ingress-addon-legacy-192000 kubelet[2608]: E0906 19:17:48.190215    2608 pod_workers.go:191] Error syncing pod 01a7898f-c847-4368-9dca-bba2dbd126b6 ("hello-world-app-5f5d8b66bb-wggrs_default(01a7898f-c847-4368-9dca-bba2dbd126b6)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-wggrs_default(01a7898f-c847-4368-9dca-bba2dbd126b6)"
	Sep 06 19:17:48 ingress-addon-legacy-192000 kubelet[2608]: W0906 19:17:48.212317    2608 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/d0b15976-df66-4275-abab-4ce31c82fc0c/volumes" does not exist
	
	* 
	* ==> storage-provisioner [48368df5af8f] <==
	* I0906 19:16:36.722327       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0906 19:16:36.728315       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0906 19:16:36.728334       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0906 19:16:36.731081       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0906 19:16:36.731157       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-192000_9e9a9660-9e18-4f40-87e6-06544070cfc3!
	I0906 19:16:36.731409       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"223e5c70-7d65-4b83-bf61-d238500d662a", APIVersion:"v1", ResourceVersion:"436", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-192000_9e9a9660-9e18-4f40-87e6-06544070cfc3 became leader
	I0906 19:16:36.831810       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-192000_9e9a9660-9e18-4f40-87e6-06544070cfc3!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-192000 -n ingress-addon-legacy-192000
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-192000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (59.23s)

TestMountStart/serial/StartWithMountFirst (10.29s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-730000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-730000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.219685541s)

-- stdout --
	* [mount-start-1-730000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17116
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-730000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-730000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-730000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-730000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-730000 -n mount-start-1-730000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-730000 -n mount-start-1-730000: exit status 7 (69.780417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-730000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.29s)
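Every qemu2 start in this failure (and the ones that follow) dies the same way: the `socket_vmnet_client` handshake against `/var/run/socket_vmnet` is refused before the VM boots. A minimal sketch of that probe, in plain Python rather than minikube's own code (the socket path is taken from the log above; the function name is ours), can distinguish a missing socket file from a daemon that is not listening:

```python
import socket

def probe_unix_socket(path: str) -> str:
    """Attempt the same Unix-domain connect that socket_vmnet_client makes.

    Returns a short diagnosis string instead of raising, so the result
    can be checked programmatically.
    """
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.connect(path)
        return "ok: daemon is accepting connections"
    except FileNotFoundError:
        return "missing: socket file does not exist (socket_vmnet not started)"
    except ConnectionRefusedError:
        return "refused: socket file exists but no daemon is listening"
    except PermissionError:
        return "denied: socket exists but this user cannot connect to it"
    finally:
        s.close()

if __name__ == "__main__":
    print(probe_unix_socket("/var/run/socket_vmnet"))
```

In this run the error string matches the "refused" case, which would point at the socket_vmnet service on the CI host rather than at the minikube binary under test.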

TestMultiNode/serial/FreshStart2Nodes (9.85s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-122000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:85: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-122000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.783564459s)

-- stdout --
	* [multinode-122000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17116
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-122000 in cluster multinode-122000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-122000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 12:19:59.840974    2515 out.go:296] Setting OutFile to fd 1 ...
	I0906 12:19:59.841138    2515 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:19:59.841143    2515 out.go:309] Setting ErrFile to fd 2...
	I0906 12:19:59.841145    2515 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:19:59.841242    2515 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17116-1006/.minikube/bin
	I0906 12:19:59.842219    2515 out.go:303] Setting JSON to false
	I0906 12:19:59.857193    2515 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1173,"bootTime":1694026826,"procs":403,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:19:59.857266    2515 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 12:19:59.861362    2515 out.go:177] * [multinode-122000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 12:19:59.868426    2515 out.go:177]   - MINIKUBE_LOCATION=17116
	I0906 12:19:59.868473    2515 notify.go:220] Checking for updates...
	I0906 12:19:59.872344    2515 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	I0906 12:19:59.875392    2515 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:19:59.881362    2515 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:19:59.884381    2515 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	I0906 12:19:59.887377    2515 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:19:59.888962    2515 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 12:19:59.893367    2515 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 12:19:59.900195    2515 start.go:298] selected driver: qemu2
	I0906 12:19:59.900202    2515 start.go:902] validating driver "qemu2" against <nil>
	I0906 12:19:59.900208    2515 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:19:59.902226    2515 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0906 12:19:59.905361    2515 out.go:177] * Automatically selected the socket_vmnet network
	I0906 12:19:59.908443    2515 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:19:59.908460    2515 cni.go:84] Creating CNI manager for ""
	I0906 12:19:59.908466    2515 cni.go:136] 0 nodes found, recommending kindnet
	I0906 12:19:59.908469    2515 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0906 12:19:59.908475    2515 start_flags.go:321] config:
	{Name:multinode-122000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-122000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:doc
ker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAge
ntPID:0 AutoPauseInterval:1m0s}
	I0906 12:19:59.912685    2515 iso.go:125] acquiring lock: {Name:mk3f7665f27187d07892ba5c4ce3c49cda04e887 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:19:59.919364    2515 out.go:177] * Starting control plane node multinode-122000 in cluster multinode-122000
	I0906 12:19:59.923333    2515 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 12:19:59.923351    2515 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0906 12:19:59.923361    2515 cache.go:57] Caching tarball of preloaded images
	I0906 12:19:59.923420    2515 preload.go:174] Found /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 12:19:59.923426    2515 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0906 12:19:59.923614    2515 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/multinode-122000/config.json ...
	I0906 12:19:59.923628    2515 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/multinode-122000/config.json: {Name:mk6c770be98d51564a79b62423317c5b95dfbdbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:19:59.923806    2515 start.go:365] acquiring machines lock for multinode-122000: {Name:mkcd55fe6404821d1fd04819f324aec293fbf60b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:19:59.923837    2515 start.go:369] acquired machines lock for "multinode-122000" in 25.417µs
	I0906 12:19:59.923847    2515 start.go:93] Provisioning new machine with config: &{Name:multinode-122000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-122000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSiz
e:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:19:59.923877    2515 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:19:59.932386    2515 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0906 12:19:59.948048    2515 start.go:159] libmachine.API.Create for "multinode-122000" (driver="qemu2")
	I0906 12:19:59.948073    2515 client.go:168] LocalClient.Create starting
	I0906 12:19:59.948123    2515 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem
	I0906 12:19:59.948158    2515 main.go:141] libmachine: Decoding PEM data...
	I0906 12:19:59.948171    2515 main.go:141] libmachine: Parsing certificate...
	I0906 12:19:59.948209    2515 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/cert.pem
	I0906 12:19:59.948232    2515 main.go:141] libmachine: Decoding PEM data...
	I0906 12:19:59.948240    2515 main.go:141] libmachine: Parsing certificate...
	I0906 12:19:59.948551    2515 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17116-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 12:20:00.064521    2515 main.go:141] libmachine: Creating SSH key...
	I0906 12:20:00.220196    2515 main.go:141] libmachine: Creating Disk image...
	I0906 12:20:00.220207    2515 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:20:00.223934    2515 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/multinode-122000/disk.qcow2.raw /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/multinode-122000/disk.qcow2
	I0906 12:20:00.237867    2515 main.go:141] libmachine: STDOUT: 
	I0906 12:20:00.237894    2515 main.go:141] libmachine: STDERR: 
	I0906 12:20:00.237989    2515 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/multinode-122000/disk.qcow2 +20000M
	I0906 12:20:00.247912    2515 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:20:00.247931    2515 main.go:141] libmachine: STDERR: 
	I0906 12:20:00.247944    2515 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/multinode-122000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/multinode-122000/disk.qcow2
	I0906 12:20:00.247951    2515 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:20:00.247990    2515 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/multinode-122000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/multinode-122000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/multinode-122000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:1b:bb:9c:99:64 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/multinode-122000/disk.qcow2
	I0906 12:20:00.249709    2515 main.go:141] libmachine: STDOUT: 
	I0906 12:20:00.249723    2515 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:20:00.249744    2515 client.go:171] LocalClient.Create took 301.666042ms
	I0906 12:20:02.251886    2515 start.go:128] duration metric: createHost completed in 2.328005667s
	I0906 12:20:02.251939    2515 start.go:83] releasing machines lock for "multinode-122000", held for 2.328109291s
	W0906 12:20:02.251990    2515 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:20:02.259483    2515 out.go:177] * Deleting "multinode-122000" in qemu2 ...
	W0906 12:20:02.280913    2515 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:20:02.280935    2515 start.go:687] Will try again in 5 seconds ...
	I0906 12:20:07.283111    2515 start.go:365] acquiring machines lock for multinode-122000: {Name:mkcd55fe6404821d1fd04819f324aec293fbf60b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:20:07.283570    2515 start.go:369] acquired machines lock for "multinode-122000" in 364.958µs
	I0906 12:20:07.283671    2515 start.go:93] Provisioning new machine with config: &{Name:multinode-122000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-122000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSiz
e:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:20:07.284049    2515 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:20:07.289858    2515 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0906 12:20:07.337525    2515 start.go:159] libmachine.API.Create for "multinode-122000" (driver="qemu2")
	I0906 12:20:07.337577    2515 client.go:168] LocalClient.Create starting
	I0906 12:20:07.337723    2515 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem
	I0906 12:20:07.337775    2515 main.go:141] libmachine: Decoding PEM data...
	I0906 12:20:07.337797    2515 main.go:141] libmachine: Parsing certificate...
	I0906 12:20:07.337870    2515 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/cert.pem
	I0906 12:20:07.337910    2515 main.go:141] libmachine: Decoding PEM data...
	I0906 12:20:07.337924    2515 main.go:141] libmachine: Parsing certificate...
	I0906 12:20:07.338838    2515 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17116-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 12:20:07.473950    2515 main.go:141] libmachine: Creating SSH key...
	I0906 12:20:07.535695    2515 main.go:141] libmachine: Creating Disk image...
	I0906 12:20:07.535701    2515 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:20:07.535837    2515 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/multinode-122000/disk.qcow2.raw /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/multinode-122000/disk.qcow2
	I0906 12:20:07.544241    2515 main.go:141] libmachine: STDOUT: 
	I0906 12:20:07.544255    2515 main.go:141] libmachine: STDERR: 
	I0906 12:20:07.544297    2515 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/multinode-122000/disk.qcow2 +20000M
	I0906 12:20:07.551391    2515 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:20:07.551400    2515 main.go:141] libmachine: STDERR: 
	I0906 12:20:07.551411    2515 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/multinode-122000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/multinode-122000/disk.qcow2
	I0906 12:20:07.551419    2515 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:20:07.551460    2515 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/multinode-122000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/multinode-122000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/multinode-122000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:6e:49:7f:97:74 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/multinode-122000/disk.qcow2
	I0906 12:20:07.552911    2515 main.go:141] libmachine: STDOUT: 
	I0906 12:20:07.552925    2515 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:20:07.552937    2515 client.go:171] LocalClient.Create took 215.356625ms
	I0906 12:20:09.555135    2515 start.go:128] duration metric: createHost completed in 2.271064041s
	I0906 12:20:09.555232    2515 start.go:83] releasing machines lock for "multinode-122000", held for 2.2716555s
	W0906 12:20:09.555673    2515 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-122000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-122000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:20:09.566431    2515 out.go:177] 
	W0906 12:20:09.570530    2515 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:20:09.570558    2515 out.go:239] * 
	* 
	W0906 12:20:09.573051    2515 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:20:09.583450    2515 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:87: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-122000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-122000 -n multinode-122000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-122000 -n multinode-122000: exit status 7 (66.059583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-122000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.85s)
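Every qemu2 start in this report dies the same way: `Failed to connect to "/var/run/socket_vmnet": Connection refused` before the VM ever boots. A minimal sketch of distinguishing the possible failure modes on that socket, assuming the default path shown in the log; `classify_socket` is a hypothetical helper, not part of minikube:

```python
import os
import stat

SOCKET_PATH = "/var/run/socket_vmnet"  # default path, taken from the log above

def classify_socket(path: str) -> str:
    """Classify why connecting to a unix socket could fail before qemu starts."""
    if not os.path.exists(path):
        return "missing"       # daemon never started, or wrong path
    if not stat.S_ISSOCK(os.stat(path).st_mode):
        return "not-a-socket"  # stale regular file left behind
    return "present"           # socket exists; "Connection refused" then means no listener

print(classify_socket(SOCKET_PATH))
```

On the failing Jenkins host this would narrow the error down to whether the socket_vmnet daemon is absent entirely or present but not accepting connections.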

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (107.45s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-122000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:481: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-122000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (127.466667ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-122000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:483: failed to create busybox deployment to multinode cluster
multinode_test.go:486: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-122000 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-122000 -- rollout status deployment/busybox: exit status 1 (56.769333ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-122000"

                                                
                                                
** /stderr **
multinode_test.go:488: failed to deploy busybox to multinode cluster
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-122000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-122000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (54.892542ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-122000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-122000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-122000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.641041ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-122000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-122000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-122000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.580458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-122000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-122000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-122000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.442167ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-122000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-122000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-122000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.690958ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-122000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-122000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-122000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.129291ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-122000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
E0906 12:20:22.058562    1421 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/functional-779000/client.crt: no such file or directory
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-122000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-122000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.797541ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-122000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-122000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-122000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.305333ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-122000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-122000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-122000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.280083ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-122000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-122000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-122000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.804708ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-122000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
E0906 12:21:43.911130    1421 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/functional-779000/client.crt: no such file or directory
E0906 12:21:50.810549    1421 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/ingress-addon-legacy-192000/client.crt: no such file or directory
E0906 12:21:50.816898    1421 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/ingress-addon-legacy-192000/client.crt: no such file or directory
E0906 12:21:50.828977    1421 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/ingress-addon-legacy-192000/client.crt: no such file or directory
E0906 12:21:50.851264    1421 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/ingress-addon-legacy-192000/client.crt: no such file or directory
E0906 12:21:50.893368    1421 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/ingress-addon-legacy-192000/client.crt: no such file or directory
E0906 12:21:50.975477    1421 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/ingress-addon-legacy-192000/client.crt: no such file or directory
E0906 12:21:51.137609    1421 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/ingress-addon-legacy-192000/client.crt: no such file or directory
E0906 12:21:51.459740    1421 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/ingress-addon-legacy-192000/client.crt: no such file or directory
E0906 12:21:52.102069    1421 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/ingress-addon-legacy-192000/client.crt: no such file or directory
E0906 12:21:53.384414    1421 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/ingress-addon-legacy-192000/client.crt: no such file or directory
E0906 12:21:55.946699    1421 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/ingress-addon-legacy-192000/client.crt: no such file or directory
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-122000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-122000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.01625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-122000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:512: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:516: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-122000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:516: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-122000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.194584ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-122000"

                                                
                                                
** /stderr **
multinode_test.go:518: failed get Pod names
multinode_test.go:524: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-122000 -- exec  -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-122000 -- exec  -- nslookup kubernetes.io: exit status 1 (54.579792ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-122000"

                                                
                                                
** /stderr **
multinode_test.go:526: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-122000 -- exec  -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-122000 -- exec  -- nslookup kubernetes.default: exit status 1 (54.851916ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-122000"

                                                
                                                
** /stderr **
multinode_test.go:536: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:542: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-122000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-122000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (54.453875ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-122000"

                                                
                                                
** /stderr **
multinode_test.go:544: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-122000 -n multinode-122000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-122000 -n multinode-122000: exit status 7 (29.63625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-122000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (107.45s)
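The 107s spent in DeployApp2Nodes is a retry loop: the test re-runs `kubectl get pods -o jsonpath='{.items[*].status.podIP}'` until it gives up on the repeated `no server found` errors. A sketch of that pattern, with an assumed retry budget and a stand-in fetcher (the real test shells out to kubectl with its own backoff):

```python
def fetch_pod_ips(run, retries=5):
    """Call run() until it succeeds or the retry budget is spent."""
    last_err = None
    for _ in range(retries):
        try:
            return run()
        except RuntimeError as err:  # stand-in for a non-zero kubectl exit
            last_err = err
    raise RuntimeError(f"gave up after {retries} attempts: {last_err}")

def cluster_down():
    # Mimics the stderr seen above for every attempt in this report.
    raise RuntimeError('no server found for cluster "multinode-122000"')

try:
    fetch_pod_ips(cluster_down, retries=3)
except RuntimeError as err:
    print(err)
```

Because the cluster never started, every attempt fails identically and the loop only adds wall-clock time to the failure.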

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-122000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-122000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.021667ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-122000"

                                                
                                                
** /stderr **
multinode_test.go:554: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-122000 -n multinode-122000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-122000 -n multinode-122000: exit status 7 (28.912042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-122000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.08s)

                                                
                                    
TestMultiNode/serial/AddNode (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-122000 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-122000 -v 3 --alsologtostderr: exit status 89 (39.588125ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-122000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0906 12:21:57.160669    2605 out.go:296] Setting OutFile to fd 1 ...
	I0906 12:21:57.160880    2605 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:21:57.160883    2605 out.go:309] Setting ErrFile to fd 2...
	I0906 12:21:57.160885    2605 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:21:57.160997    2605 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17116-1006/.minikube/bin
	I0906 12:21:57.161201    2605 mustload.go:65] Loading cluster: multinode-122000
	I0906 12:21:57.161374    2605 config.go:182] Loaded profile config "multinode-122000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 12:21:57.165644    2605 out.go:177] * The control plane node must be running for this command
	I0906 12:21:57.168785    2605 out.go:177]   To start a cluster, run: "minikube start -p multinode-122000"

                                                
                                                
** /stderr **
multinode_test.go:112: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-122000 -v 3 --alsologtostderr" : exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-122000 -n multinode-122000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-122000 -n multinode-122000: exit status 7 (28.981292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-122000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:155: expected profile "multinode-122000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-122000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-122000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"VMDriver\":\"\",\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":0,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.1\",\"ClusterName\":\"multinode-122000\",\"Namespace\":\"default\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\",\"NodeIP\":\"\",\"NodePort\":8443,\"NodeName\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-122000 -n multinode-122000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-122000 -n multinode-122000: exit status 7 (28.935292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-122000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.10s)
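The ProfileList check is a node count: the profile JSON above carries a single entry in `Config.Nodes`, while the test expected three after the (failed) node additions. A sketch of that comparison against a trimmed-down version of the same JSON; the field names come from the log, the helper is hypothetical:

```python
import json

# Trimmed-down version of the `profile list --output json` blob from the log.
profile_list = json.dumps({
    "invalid": [],
    "valid": [{
        "Name": "multinode-122000",
        "Status": "Stopped",
        "Config": {"Nodes": [{"Name": "", "ControlPlane": True, "Worker": True}]},
    }],
})

def node_count(raw: str, profile: str) -> int:
    """Count configured nodes for a named profile in `profile list` JSON."""
    for prof in json.loads(raw).get("valid", []):
        if prof.get("Name") == profile:
            return len(prof["Config"]["Nodes"])
    return 0

print(node_count(profile_list, "multinode-122000"))  # log shows 1; test wanted 3
```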

                                                
                                    
TestMultiNode/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-122000 status --output json --alsologtostderr
multinode_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-122000 status --output json --alsologtostderr: exit status 7 (28.829708ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-122000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0906 12:21:57.332574    2615 out.go:296] Setting OutFile to fd 1 ...
	I0906 12:21:57.332710    2615 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:21:57.332713    2615 out.go:309] Setting ErrFile to fd 2...
	I0906 12:21:57.332716    2615 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:21:57.332828    2615 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17116-1006/.minikube/bin
	I0906 12:21:57.332942    2615 out.go:303] Setting JSON to true
	I0906 12:21:57.332953    2615 mustload.go:65] Loading cluster: multinode-122000
	I0906 12:21:57.333036    2615 notify.go:220] Checking for updates...
	I0906 12:21:57.333124    2615 config.go:182] Loaded profile config "multinode-122000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 12:21:57.333129    2615 status.go:255] checking status of multinode-122000 ...
	I0906 12:21:57.333336    2615 status.go:330] multinode-122000 host status = "Stopped" (err=<nil>)
	I0906 12:21:57.333339    2615 status.go:343] host is not running, skipping remaining checks
	I0906 12:21:57.333341    2615 status.go:257] multinode-122000 status: &{Name:multinode-122000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:180: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-122000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
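This CopyFile failure is a shape mismatch rather than a VM error: with one node, `minikube status --output json` printed a single JSON object, while the test tried to unmarshal it into a `[]cmd.Status` slice. A sketch of decoding that tolerates both shapes; `decode_statuses` is a hypothetical helper, not what the minikube test actually does:

```python
import json

def decode_statuses(raw: str) -> list:
    """Accept either a JSON array of node statuses or a bare single object."""
    value = json.loads(raw)
    return value if isinstance(value, list) else [value]

# The single-object stdout captured above.
raw = ('{"Name":"multinode-122000","Host":"Stopped","Kubelet":"Stopped",'
       '"APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}')
statuses = decode_statuses(raw)
print(len(statuses), statuses[0]["Host"])  # → 1 Stopped
```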
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-122000 -n multinode-122000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-122000 -n multinode-122000: exit status 7 (28.927083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-122000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)

                                                
                                    
TestMultiNode/serial/StopNode (0.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-122000 node stop m03
multinode_test.go:210: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-122000 node stop m03: exit status 85 (46.199667ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:212: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-122000 node stop m03": exit status 85
multinode_test.go:216: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-122000 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-122000 status: exit status 7 (29.263834ms)

                                                
                                                
-- stdout --
	multinode-122000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-122000 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-122000 status --alsologtostderr: exit status 7 (28.56775ms)

                                                
                                                
-- stdout --
	multinode-122000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0906 12:21:57.466393    2623 out.go:296] Setting OutFile to fd 1 ...
	I0906 12:21:57.466537    2623 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:21:57.466540    2623 out.go:309] Setting ErrFile to fd 2...
	I0906 12:21:57.466542    2623 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:21:57.466653    2623 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17116-1006/.minikube/bin
	I0906 12:21:57.466762    2623 out.go:303] Setting JSON to false
	I0906 12:21:57.466777    2623 mustload.go:65] Loading cluster: multinode-122000
	I0906 12:21:57.466829    2623 notify.go:220] Checking for updates...
	I0906 12:21:57.466936    2623 config.go:182] Loaded profile config "multinode-122000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 12:21:57.466941    2623 status.go:255] checking status of multinode-122000 ...
	I0906 12:21:57.467123    2623 status.go:330] multinode-122000 host status = "Stopped" (err=<nil>)
	I0906 12:21:57.467126    2623 status.go:343] host is not running, skipping remaining checks
	I0906 12:21:57.467129    2623 status.go:257] multinode-122000 status: &{Name:multinode-122000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:229: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-122000 status --alsologtostderr": multinode-122000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-122000 -n multinode-122000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-122000 -n multinode-122000: exit status 7 (29.20625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-122000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.13s)
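The "incorrect number of running kubelets" assertion above counts `kubelet: Running` entries in the plain-text `status` output; with the host stopped, that count is zero. A minimal sketch of such a count follows (an illustration of the check, not minikube's actual test code):

```go
package main

import (
	"fmt"
	"strings"
)

// countRunningKubelets counts "kubelet: Running" lines in plain-text
// `minikube status` output, mirroring the kind of check behind the
// "incorrect number of running kubelets" assertion.
func countRunningKubelets(statusOut string) int {
	n := 0
	for _, line := range strings.Split(statusOut, "\n") {
		if strings.TrimSpace(line) == "kubelet: Running" {
			n++
		}
	}
	return n
}

func main() {
	// The stopped-host output from the log above:
	out := "multinode-122000\ntype: Control Plane\nhost: Stopped\n" +
		"kubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"
	fmt.Println(countRunningKubelets(out)) // 0: the host never started
}
```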

                                                
                                    
TestMultiNode/serial/StartAfterStop (0.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-122000 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-122000 node start m03 --alsologtostderr: exit status 85 (48.398833ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0906 12:21:57.525007    2627 out.go:296] Setting OutFile to fd 1 ...
	I0906 12:21:57.525213    2627 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:21:57.525216    2627 out.go:309] Setting ErrFile to fd 2...
	I0906 12:21:57.525218    2627 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:21:57.525330    2627 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17116-1006/.minikube/bin
	I0906 12:21:57.525550    2627 mustload.go:65] Loading cluster: multinode-122000
	I0906 12:21:57.525719    2627 config.go:182] Loaded profile config "multinode-122000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 12:21:57.530119    2627 out.go:177] 
	W0906 12:21:57.537117    2627 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0906 12:21:57.537121    2627 out.go:239] * 
	* 
	W0906 12:21:57.538764    2627 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:21:57.540284    2627 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:256: I0906 12:21:57.525007    2627 out.go:296] Setting OutFile to fd 1 ...
I0906 12:21:57.525213    2627 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0906 12:21:57.525216    2627 out.go:309] Setting ErrFile to fd 2...
I0906 12:21:57.525218    2627 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0906 12:21:57.525330    2627 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17116-1006/.minikube/bin
I0906 12:21:57.525550    2627 mustload.go:65] Loading cluster: multinode-122000
I0906 12:21:57.525719    2627 config.go:182] Loaded profile config "multinode-122000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0906 12:21:57.530119    2627 out.go:177] 
W0906 12:21:57.537117    2627 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0906 12:21:57.537121    2627 out.go:239] * 
* 
W0906 12:21:57.538764    2627 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0906 12:21:57.540284    2627 out.go:177] 
multinode_test.go:257: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-122000 node start m03 --alsologtostderr": exit status 85
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-122000 status
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-122000 status: exit status 7 (29.014708ms)

                                                
                                                
-- stdout --
	multinode-122000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:263: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-122000 status" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-122000 -n multinode-122000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-122000 -n multinode-122000: exit status 7 (29.3985ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-122000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (0.11s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (5.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-122000
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-122000
multinode_test.go:295: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-122000 --wait=true -v=8 --alsologtostderr
E0906 12:22:01.069170    1421 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/ingress-addon-legacy-192000/client.crt: no such file or directory
multinode_test.go:295: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-122000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.177700292s)

                                                
                                                
-- stdout --
	* [multinode-122000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17116
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node multinode-122000 in cluster multinode-122000
	* Restarting existing qemu2 VM for "multinode-122000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-122000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0906 12:21:57.718909    2637 out.go:296] Setting OutFile to fd 1 ...
	I0906 12:21:57.719027    2637 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:21:57.719030    2637 out.go:309] Setting ErrFile to fd 2...
	I0906 12:21:57.719032    2637 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:21:57.719142    2637 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17116-1006/.minikube/bin
	I0906 12:21:57.720121    2637 out.go:303] Setting JSON to false
	I0906 12:21:57.735196    2637 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1291,"bootTime":1694026826,"procs":400,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:21:57.735257    2637 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 12:21:57.739228    2637 out.go:177] * [multinode-122000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 12:21:57.746137    2637 out.go:177]   - MINIKUBE_LOCATION=17116
	I0906 12:21:57.750133    2637 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	I0906 12:21:57.746204    2637 notify.go:220] Checking for updates...
	I0906 12:21:57.753108    2637 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:21:57.756116    2637 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:21:57.759113    2637 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	I0906 12:21:57.762042    2637 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:21:57.765370    2637 config.go:182] Loaded profile config "multinode-122000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 12:21:57.765422    2637 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 12:21:57.770122    2637 out.go:177] * Using the qemu2 driver based on existing profile
	I0906 12:21:57.777125    2637 start.go:298] selected driver: qemu2
	I0906 12:21:57.777132    2637 start.go:902] validating driver "qemu2" against &{Name:multinode-122000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.1 ClusterName:multinode-122000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 12:21:57.777203    2637 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:21:57.779155    2637 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:21:57.779208    2637 cni.go:84] Creating CNI manager for ""
	I0906 12:21:57.779212    2637 cni.go:136] 1 nodes found, recommending kindnet
	I0906 12:21:57.779219    2637 start_flags.go:321] config:
	{Name:multinode-122000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-122000 Namespace:default API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 12:21:57.783196    2637 iso.go:125] acquiring lock: {Name:mk3f7665f27187d07892ba5c4ce3c49cda04e887 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:21:57.790108    2637 out.go:177] * Starting control plane node multinode-122000 in cluster multinode-122000
	I0906 12:21:57.794134    2637 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 12:21:57.794151    2637 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0906 12:21:57.794170    2637 cache.go:57] Caching tarball of preloaded images
	I0906 12:21:57.794223    2637 preload.go:174] Found /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 12:21:57.794228    2637 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0906 12:21:57.794286    2637 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/multinode-122000/config.json ...
	I0906 12:21:57.794646    2637 start.go:365] acquiring machines lock for multinode-122000: {Name:mkcd55fe6404821d1fd04819f324aec293fbf60b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:21:57.794675    2637 start.go:369] acquired machines lock for "multinode-122000" in 23.167µs
	I0906 12:21:57.794685    2637 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:21:57.794689    2637 fix.go:54] fixHost starting: 
	I0906 12:21:57.794798    2637 fix.go:102] recreateIfNeeded on multinode-122000: state=Stopped err=<nil>
	W0906 12:21:57.794806    2637 fix.go:128] unexpected machine state, will restart: <nil>
	I0906 12:21:57.803115    2637 out.go:177] * Restarting existing qemu2 VM for "multinode-122000" ...
	I0906 12:21:57.806935    2637 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/multinode-122000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/multinode-122000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/multinode-122000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:6e:49:7f:97:74 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/multinode-122000/disk.qcow2
	I0906 12:21:57.808817    2637 main.go:141] libmachine: STDOUT: 
	I0906 12:21:57.808894    2637 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:21:57.808920    2637 fix.go:56] fixHost completed within 14.230375ms
	I0906 12:21:57.808924    2637 start.go:83] releasing machines lock for "multinode-122000", held for 14.24525ms
	W0906 12:21:57.808930    2637 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:21:57.808961    2637 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:21:57.808965    2637 start.go:687] Will try again in 5 seconds ...
	I0906 12:22:02.811003    2637 start.go:365] acquiring machines lock for multinode-122000: {Name:mkcd55fe6404821d1fd04819f324aec293fbf60b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:22:02.811420    2637 start.go:369] acquired machines lock for "multinode-122000" in 340.458µs
	I0906 12:22:02.811579    2637 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:22:02.811599    2637 fix.go:54] fixHost starting: 
	I0906 12:22:02.812441    2637 fix.go:102] recreateIfNeeded on multinode-122000: state=Stopped err=<nil>
	W0906 12:22:02.812468    2637 fix.go:128] unexpected machine state, will restart: <nil>
	I0906 12:22:02.821889    2637 out.go:177] * Restarting existing qemu2 VM for "multinode-122000" ...
	I0906 12:22:02.826013    2637 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/multinode-122000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/multinode-122000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/multinode-122000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:6e:49:7f:97:74 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/multinode-122000/disk.qcow2
	I0906 12:22:02.835165    2637 main.go:141] libmachine: STDOUT: 
	I0906 12:22:02.835210    2637 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:22:02.835668    2637 fix.go:56] fixHost completed within 24.07175ms
	I0906 12:22:02.835683    2637 start.go:83] releasing machines lock for "multinode-122000", held for 24.2435ms
	W0906 12:22:02.835900    2637 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-122000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-122000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:22:02.842846    2637 out.go:177] 
	W0906 12:22:02.846912    2637 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:22:02.846957    2637 out.go:239] * 
	* 
	W0906 12:22:02.849270    2637 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:22:02.856877    2637 out.go:177] 

** /stderr **
multinode_test.go:297: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-122000" : exit status 80
multinode_test.go:300: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-122000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-122000 -n multinode-122000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-122000 -n multinode-122000: exit status 7 (32.421292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-122000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (5.37s)
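Editor's note: every MultiNode failure in this report traces to the same condition, `socket_vmnet_client` cannot reach the `socket_vmnet` daemon's unix socket at `/var/run/socket_vmnet` ("Connection refused"). A minimal pre-flight check, sketched below as a hypothetical helper (not part of minikube or the test harness), could verify the daemon socket before the driver is started:

```shell
#!/bin/sh
# Hypothetical pre-flight check: confirm the socket_vmnet daemon socket exists
# before invoking "minikube start" with the qemu2 driver. The default path
# matches the SocketVMnetPath value visible in the driver config logged above.
SOCK="${1:-/var/run/socket_vmnet}"
if [ -S "$SOCK" ]; then
  echo "ok: daemon socket present at $SOCK"
else
  echo "missing: $SOCK (start the socket_vmnet daemon before minikube start)"
fi
```

On the CI host in this run the check would take the `missing` branch, consistent with the repeated "Connection refused" errors.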

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-122000 node delete m03
multinode_test.go:394: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-122000 node delete m03: exit status 89 (38.831625ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-122000"

-- /stdout --
multinode_test.go:396: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-122000 node delete m03": exit status 89
multinode_test.go:400: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-122000 status --alsologtostderr
multinode_test.go:400: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-122000 status --alsologtostderr: exit status 7 (28.578666ms)

-- stdout --
	multinode-122000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0906 12:22:03.038284    2651 out.go:296] Setting OutFile to fd 1 ...
	I0906 12:22:03.038449    2651 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:22:03.038452    2651 out.go:309] Setting ErrFile to fd 2...
	I0906 12:22:03.038454    2651 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:22:03.038560    2651 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17116-1006/.minikube/bin
	I0906 12:22:03.038662    2651 out.go:303] Setting JSON to false
	I0906 12:22:03.038676    2651 mustload.go:65] Loading cluster: multinode-122000
	I0906 12:22:03.038728    2651 notify.go:220] Checking for updates...
	I0906 12:22:03.038850    2651 config.go:182] Loaded profile config "multinode-122000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 12:22:03.038855    2651 status.go:255] checking status of multinode-122000 ...
	I0906 12:22:03.039039    2651 status.go:330] multinode-122000 host status = "Stopped" (err=<nil>)
	I0906 12:22:03.039043    2651 status.go:343] host is not running, skipping remaining checks
	I0906 12:22:03.039046    2651 status.go:257] multinode-122000 status: &{Name:multinode-122000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:402: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-122000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-122000 -n multinode-122000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-122000 -n multinode-122000: exit status 7 (28.939083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-122000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)

TestMultiNode/serial/StopMultiNode (0.15s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-122000 stop
multinode_test.go:320: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-122000 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-122000 status: exit status 7 (29.480667ms)

-- stdout --
	multinode-122000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-122000 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-122000 status --alsologtostderr: exit status 7 (29.090917ms)

-- stdout --
	multinode-122000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0906 12:22:03.186406    2659 out.go:296] Setting OutFile to fd 1 ...
	I0906 12:22:03.186540    2659 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:22:03.186543    2659 out.go:309] Setting ErrFile to fd 2...
	I0906 12:22:03.186546    2659 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:22:03.186647    2659 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17116-1006/.minikube/bin
	I0906 12:22:03.186756    2659 out.go:303] Setting JSON to false
	I0906 12:22:03.186768    2659 mustload.go:65] Loading cluster: multinode-122000
	I0906 12:22:03.186877    2659 notify.go:220] Checking for updates...
	I0906 12:22:03.186930    2659 config.go:182] Loaded profile config "multinode-122000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 12:22:03.186936    2659 status.go:255] checking status of multinode-122000 ...
	I0906 12:22:03.187134    2659 status.go:330] multinode-122000 host status = "Stopped" (err=<nil>)
	I0906 12:22:03.187138    2659 status.go:343] host is not running, skipping remaining checks
	I0906 12:22:03.187140    2659 status.go:257] multinode-122000 status: &{Name:multinode-122000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:333: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-122000 status --alsologtostderr": multinode-122000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:337: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-122000 status --alsologtostderr": multinode-122000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-122000 -n multinode-122000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-122000 -n multinode-122000: exit status 7 (28.722958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-122000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (0.15s)

TestMultiNode/serial/RestartMultiNode (5.25s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-122000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:354: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-122000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.17459025s)

-- stdout --
	* [multinode-122000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17116
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node multinode-122000 in cluster multinode-122000
	* Restarting existing qemu2 VM for "multinode-122000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-122000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 12:22:03.243300    2663 out.go:296] Setting OutFile to fd 1 ...
	I0906 12:22:03.243409    2663 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:22:03.243413    2663 out.go:309] Setting ErrFile to fd 2...
	I0906 12:22:03.243417    2663 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:22:03.243523    2663 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17116-1006/.minikube/bin
	I0906 12:22:03.244465    2663 out.go:303] Setting JSON to false
	I0906 12:22:03.259339    2663 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1297,"bootTime":1694026826,"procs":399,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:22:03.259412    2663 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 12:22:03.263405    2663 out.go:177] * [multinode-122000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 12:22:03.270435    2663 out.go:177]   - MINIKUBE_LOCATION=17116
	I0906 12:22:03.274402    2663 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	I0906 12:22:03.270489    2663 notify.go:220] Checking for updates...
	I0906 12:22:03.277402    2663 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:22:03.280402    2663 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:22:03.283334    2663 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	I0906 12:22:03.286382    2663 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:22:03.289663    2663 config.go:182] Loaded profile config "multinode-122000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 12:22:03.289904    2663 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 12:22:03.294412    2663 out.go:177] * Using the qemu2 driver based on existing profile
	I0906 12:22:03.301418    2663 start.go:298] selected driver: qemu2
	I0906 12:22:03.301423    2663 start.go:902] validating driver "qemu2" against &{Name:multinode-122000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-122000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 12:22:03.301488    2663 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:22:03.303327    2663 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:22:03.303351    2663 cni.go:84] Creating CNI manager for ""
	I0906 12:22:03.303355    2663 cni.go:136] 1 nodes found, recommending kindnet
	I0906 12:22:03.303361    2663 start_flags.go:321] config:
	{Name:multinode-122000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-122000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 12:22:03.307282    2663 iso.go:125] acquiring lock: {Name:mk3f7665f27187d07892ba5c4ce3c49cda04e887 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:22:03.314407    2663 out.go:177] * Starting control plane node multinode-122000 in cluster multinode-122000
	I0906 12:22:03.318218    2663 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 12:22:03.318237    2663 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0906 12:22:03.318249    2663 cache.go:57] Caching tarball of preloaded images
	I0906 12:22:03.318308    2663 preload.go:174] Found /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 12:22:03.318313    2663 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0906 12:22:03.318380    2663 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/multinode-122000/config.json ...
	I0906 12:22:03.318735    2663 start.go:365] acquiring machines lock for multinode-122000: {Name:mkcd55fe6404821d1fd04819f324aec293fbf60b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:22:03.318763    2663 start.go:369] acquired machines lock for "multinode-122000" in 21.792µs
	I0906 12:22:03.318772    2663 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:22:03.318779    2663 fix.go:54] fixHost starting: 
	I0906 12:22:03.318889    2663 fix.go:102] recreateIfNeeded on multinode-122000: state=Stopped err=<nil>
	W0906 12:22:03.318898    2663 fix.go:128] unexpected machine state, will restart: <nil>
	I0906 12:22:03.323417    2663 out.go:177] * Restarting existing qemu2 VM for "multinode-122000" ...
	I0906 12:22:03.331411    2663 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/multinode-122000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/multinode-122000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/multinode-122000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:6e:49:7f:97:74 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/multinode-122000/disk.qcow2
	I0906 12:22:03.333364    2663 main.go:141] libmachine: STDOUT: 
	I0906 12:22:03.333382    2663 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:22:03.333409    2663 fix.go:56] fixHost completed within 14.632375ms
	I0906 12:22:03.333414    2663 start.go:83] releasing machines lock for "multinode-122000", held for 14.647833ms
	W0906 12:22:03.333420    2663 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:22:03.333449    2663 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:22:03.333454    2663 start.go:687] Will try again in 5 seconds ...
	I0906 12:22:08.335446    2663 start.go:365] acquiring machines lock for multinode-122000: {Name:mkcd55fe6404821d1fd04819f324aec293fbf60b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:22:08.335824    2663 start.go:369] acquired machines lock for "multinode-122000" in 285.375µs
	I0906 12:22:08.335943    2663 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:22:08.335962    2663 fix.go:54] fixHost starting: 
	I0906 12:22:08.336660    2663 fix.go:102] recreateIfNeeded on multinode-122000: state=Stopped err=<nil>
	W0906 12:22:08.336687    2663 fix.go:128] unexpected machine state, will restart: <nil>
	I0906 12:22:08.344831    2663 out.go:177] * Restarting existing qemu2 VM for "multinode-122000" ...
	I0906 12:22:08.349091    2663 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/multinode-122000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/multinode-122000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/multinode-122000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:6e:49:7f:97:74 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/multinode-122000/disk.qcow2
	I0906 12:22:08.357230    2663 main.go:141] libmachine: STDOUT: 
	I0906 12:22:08.357288    2663 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:22:08.357356    2663 fix.go:56] fixHost completed within 21.391083ms
	I0906 12:22:08.357374    2663 start.go:83] releasing machines lock for "multinode-122000", held for 21.529208ms
	W0906 12:22:08.357553    2663 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-122000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-122000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:22:08.365008    2663 out.go:177] 
	W0906 12:22:08.369087    2663 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:22:08.369112    2663 out.go:239] * 
	* 
	W0906 12:22:08.371540    2663 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:22:08.378992    2663 out.go:177] 

** /stderr **
multinode_test.go:356: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-122000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-122000 -n multinode-122000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-122000 -n multinode-122000: exit status 7 (69.188542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-122000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)
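Editor's note: when filing the GitHub issue the log boxes request, the repeated failure can be summarized rather than pasted in full. A hedged sketch follows; the helper is hypothetical, and the here-doc sample stands in for a real `logs.txt` captured from a failed run:

```shell
#!/bin/sh
# Hypothetical triage helper: count socket_vmnet connection failures in a log.
# The here-doc below is sample data standing in for an actual logs.txt.
cat > /tmp/sample_log.txt <<'EOF'
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
EOF
count=$(grep -c 'socket_vmnet.*Connection refused' /tmp/sample_log.txt)
echo "socket_vmnet connection failures: $count"
```

With the two-line sample above, this prints `socket_vmnet connection failures: 2`.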

TestMultiNode/serial/ValidateNameConflict (20.08s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-122000
multinode_test.go:452: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-122000-m01 --driver=qemu2 
E0906 12:22:11.311618    1421 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/ingress-addon-legacy-192000/client.crt: no such file or directory
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-122000-m01 --driver=qemu2 : exit status 80 (9.967739791s)

-- stdout --
	* [multinode-122000-m01] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17116
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-122000-m01 in cluster multinode-122000-m01
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-122000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-122000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-122000-m02 --driver=qemu2 
multinode_test.go:460: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-122000-m02 --driver=qemu2 : exit status 80 (9.872324417s)

-- stdout --
	* [multinode-122000-m02] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17116
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-122000-m02 in cluster multinode-122000-m02
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-122000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-122000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:462: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-122000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-122000
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-122000: exit status 89 (79.242417ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-122000"

-- /stdout --
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-122000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-122000 -n multinode-122000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-122000 -n multinode-122000: exit status 7 (29.397958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-122000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.08s)
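Note: every failure in this group reduces to the same host-side condition visible in the stderr above: nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client (and therefore every qemu2 VM start) gets ECONNREFUSED. A minimal diagnostic sketch, using the paths the log itself prints; the restart command is an assumption (Homebrew-installed socket_vmnet) and is not taken from this log:

```shell
# Check whether the vmnet socket daemon is reachable before re-running the suite.
#   ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
SOCKET=/var/run/socket_vmnet

if [ -S "$SOCKET" ]; then
  echo "vmnet socket present: $SOCKET"
else
  # No unix socket here means the socket_vmnet daemon is not running, so
  # /opt/socket_vmnet/bin/socket_vmnet_client has nothing to dial.
  echo "vmnet socket missing: $SOCKET"
  echo "try (assumption, Homebrew install): sudo brew services start socket_vmnet"
fi
```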

TestPreload (9.92s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-452000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
E0906 12:22:31.792104    1421 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/ingress-addon-legacy-192000/client.crt: no such file or directory
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-452000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.756684s)

-- stdout --
	* [test-preload-452000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17116
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node test-preload-452000 in cluster test-preload-452000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-452000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 12:22:28.702496    2720 out.go:296] Setting OutFile to fd 1 ...
	I0906 12:22:28.702612    2720 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:22:28.702615    2720 out.go:309] Setting ErrFile to fd 2...
	I0906 12:22:28.702624    2720 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:22:28.702752    2720 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17116-1006/.minikube/bin
	I0906 12:22:28.703714    2720 out.go:303] Setting JSON to false
	I0906 12:22:28.718756    2720 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1322,"bootTime":1694026826,"procs":400,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:22:28.718834    2720 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 12:22:28.724483    2720 out.go:177] * [test-preload-452000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 12:22:28.732484    2720 out.go:177]   - MINIKUBE_LOCATION=17116
	I0906 12:22:28.736513    2720 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	I0906 12:22:28.732547    2720 notify.go:220] Checking for updates...
	I0906 12:22:28.740373    2720 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:22:28.743458    2720 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:22:28.746551    2720 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	I0906 12:22:28.749436    2720 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:22:28.752807    2720 config.go:182] Loaded profile config "multinode-122000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 12:22:28.752851    2720 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 12:22:28.757466    2720 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 12:22:28.764467    2720 start.go:298] selected driver: qemu2
	I0906 12:22:28.764479    2720 start.go:902] validating driver "qemu2" against <nil>
	I0906 12:22:28.764486    2720 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:22:28.766409    2720 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0906 12:22:28.769501    2720 out.go:177] * Automatically selected the socket_vmnet network
	I0906 12:22:28.772467    2720 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:22:28.772493    2720 cni.go:84] Creating CNI manager for ""
	I0906 12:22:28.772502    2720 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 12:22:28.772512    2720 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 12:22:28.772519    2720 start_flags.go:321] config:
	{Name:test-preload-452000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-452000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 12:22:28.776705    2720 iso.go:125] acquiring lock: {Name:mk3f7665f27187d07892ba5c4ce3c49cda04e887 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:22:28.783322    2720 out.go:177] * Starting control plane node test-preload-452000 in cluster test-preload-452000
	I0906 12:22:28.787450    2720 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0906 12:22:28.787525    2720 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/test-preload-452000/config.json ...
	I0906 12:22:28.787760    2720 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/test-preload-452000/config.json: {Name:mke058dc635c298209e0e31d53ec4cabf4c0b21f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:22:28.787781    2720 cache.go:107] acquiring lock: {Name:mkb5bfb95e12e7b110ffa3b5337b65056a9d05bc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:22:28.787933    2720 cache.go:107] acquiring lock: {Name:mka77351c32884680d035d46276f38be0a4639cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:22:28.787962    2720 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 12:22:28.788014    2720 cache.go:107] acquiring lock: {Name:mk0551a1d7618bd21898cdbf0b49622ac82b8118 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:22:28.788040    2720 cache.go:107] acquiring lock: {Name:mk1bc1ebe088a1deca92ab2cf3fd873d3c16f49e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:22:28.788075    2720 cache.go:107] acquiring lock: {Name:mk112761d9832686b679dd68975790d638f42ebf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:22:28.788084    2720 start.go:365] acquiring machines lock for test-preload-452000: {Name:mkcd55fe6404821d1fd04819f324aec293fbf60b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:22:28.788096    2720 cache.go:107] acquiring lock: {Name:mk9919decd140a37db51a55928e32aff51616000 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:22:28.788151    2720 start.go:369] acquired machines lock for "test-preload-452000" in 50.375µs
	I0906 12:22:28.788186    2720 cache.go:107] acquiring lock: {Name:mk4503eadec25e6d6bb358de409192b6364d63aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:22:28.788238    2720 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0906 12:22:28.788295    2720 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0906 12:22:28.788309    2720 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0906 12:22:28.788188    2720 start.go:93] Provisioning new machine with config: &{Name:test-preload-452000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-452000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:22:28.788343    2720 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:22:28.788389    2720 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0906 12:22:28.788400    2720 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0906 12:22:28.793451    2720 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0906 12:22:28.788577    2720 cache.go:107] acquiring lock: {Name:mkd67c32e9802142e63fccc727019ddc69ee48c5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:22:28.788675    2720 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0906 12:22:28.794160    2720 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0906 12:22:28.804462    2720 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0906 12:22:28.804528    2720 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 12:22:28.805063    2720 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0906 12:22:28.805206    2720 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0906 12:22:28.809714    2720 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0906 12:22:28.809826    2720 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0906 12:22:28.809983    2720 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0906 12:22:28.810021    2720 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0906 12:22:28.810301    2720 start.go:159] libmachine.API.Create for "test-preload-452000" (driver="qemu2")
	I0906 12:22:28.810319    2720 client.go:168] LocalClient.Create starting
	I0906 12:22:28.810399    2720 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem
	I0906 12:22:28.810425    2720 main.go:141] libmachine: Decoding PEM data...
	I0906 12:22:28.810438    2720 main.go:141] libmachine: Parsing certificate...
	I0906 12:22:28.810478    2720 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/cert.pem
	I0906 12:22:28.810497    2720 main.go:141] libmachine: Decoding PEM data...
	I0906 12:22:28.810506    2720 main.go:141] libmachine: Parsing certificate...
	I0906 12:22:28.810811    2720 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17116-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 12:22:28.925845    2720 main.go:141] libmachine: Creating SSH key...
	I0906 12:22:29.000694    2720 main.go:141] libmachine: Creating Disk image...
	I0906 12:22:29.000703    2720 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:22:29.000839    2720 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/test-preload-452000/disk.qcow2.raw /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/test-preload-452000/disk.qcow2
	I0906 12:22:29.009260    2720 main.go:141] libmachine: STDOUT: 
	I0906 12:22:29.009277    2720 main.go:141] libmachine: STDERR: 
	I0906 12:22:29.009334    2720 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/test-preload-452000/disk.qcow2 +20000M
	I0906 12:22:29.017216    2720 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:22:29.017239    2720 main.go:141] libmachine: STDERR: 
	I0906 12:22:29.017265    2720 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/test-preload-452000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/test-preload-452000/disk.qcow2
	I0906 12:22:29.017271    2720 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:22:29.017322    2720 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/test-preload-452000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/test-preload-452000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/test-preload-452000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:e6:3f:69:6f:1f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/test-preload-452000/disk.qcow2
	I0906 12:22:29.019453    2720 main.go:141] libmachine: STDOUT: 
	I0906 12:22:29.019469    2720 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:22:29.019491    2720 client.go:171] LocalClient.Create took 209.1705ms
	I0906 12:22:29.611965    2720 cache.go:162] opening:  /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0906 12:22:29.779550    2720 cache.go:162] opening:  /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0906 12:22:30.009541    2720 cache.go:162] opening:  /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	W0906 12:22:30.061866    2720 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0906 12:22:30.061901    2720 cache.go:162] opening:  /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0906 12:22:30.177746    2720 cache.go:162] opening:  /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0906 12:22:30.278462    2720 cache.go:157] /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0906 12:22:30.278482    2720 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.490742542s
	I0906 12:22:30.278496    2720 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0906 12:22:30.377445    2720 cache.go:162] opening:  /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0906 12:22:30.548477    2720 cache.go:162] opening:  /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0906 12:22:30.672262    2720 cache.go:157] /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0906 12:22:30.672276    2720 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 1.8842915s
	I0906 12:22:30.672285    2720 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0906 12:22:30.763647    2720 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0906 12:22:30.763692    2720 cache.go:162] opening:  /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0906 12:22:31.019660    2720 start.go:128] duration metric: createHost completed in 2.231332417s
	I0906 12:22:31.019702    2720 start.go:83] releasing machines lock for "test-preload-452000", held for 2.23159575s
	W0906 12:22:31.019797    2720 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:22:31.031919    2720 out.go:177] * Deleting "test-preload-452000" in qemu2 ...
	W0906 12:22:31.053386    2720 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:22:31.053426    2720 start.go:687] Will try again in 5 seconds ...
	I0906 12:22:31.525700    2720 cache.go:157] /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0906 12:22:31.525758    2720 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 2.737243s
	I0906 12:22:31.525785    2720 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0906 12:22:33.114263    2720 cache.go:157] /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0906 12:22:33.114303    2720 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 4.326233292s
	I0906 12:22:33.114333    2720 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0906 12:22:33.704126    2720 cache.go:157] /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0906 12:22:33.708595    2720 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 4.920732917s
	I0906 12:22:33.708633    2720 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0906 12:22:34.011442    2720 cache.go:157] /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0906 12:22:34.011486    2720 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 5.223615167s
	I0906 12:22:34.011511    2720 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0906 12:22:35.385780    2720 cache.go:157] /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0906 12:22:35.385843    2720 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 6.598135s
	I0906 12:22:35.385878    2720 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0906 12:22:36.053494    2720 start.go:365] acquiring machines lock for test-preload-452000: {Name:mkcd55fe6404821d1fd04819f324aec293fbf60b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:22:36.053949    2720 start.go:369] acquired machines lock for "test-preload-452000" in 376.792µs
	I0906 12:22:36.054077    2720 start.go:93] Provisioning new machine with config: &{Name:test-preload-452000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-452000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:22:36.054345    2720 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:22:36.063771    2720 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0906 12:22:36.111138    2720 start.go:159] libmachine.API.Create for "test-preload-452000" (driver="qemu2")
	I0906 12:22:36.111174    2720 client.go:168] LocalClient.Create starting
	I0906 12:22:36.111293    2720 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem
	I0906 12:22:36.111357    2720 main.go:141] libmachine: Decoding PEM data...
	I0906 12:22:36.111384    2720 main.go:141] libmachine: Parsing certificate...
	I0906 12:22:36.111466    2720 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/cert.pem
	I0906 12:22:36.111505    2720 main.go:141] libmachine: Decoding PEM data...
	I0906 12:22:36.111525    2720 main.go:141] libmachine: Parsing certificate...
	I0906 12:22:36.112026    2720 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17116-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 12:22:36.237873    2720 main.go:141] libmachine: Creating SSH key...
	I0906 12:22:36.371713    2720 main.go:141] libmachine: Creating Disk image...
	I0906 12:22:36.371719    2720 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:22:36.371865    2720 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/test-preload-452000/disk.qcow2.raw /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/test-preload-452000/disk.qcow2
	I0906 12:22:36.380389    2720 main.go:141] libmachine: STDOUT: 
	I0906 12:22:36.380405    2720 main.go:141] libmachine: STDERR: 
	I0906 12:22:36.380457    2720 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/test-preload-452000/disk.qcow2 +20000M
	I0906 12:22:36.387697    2720 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:22:36.387716    2720 main.go:141] libmachine: STDERR: 
	I0906 12:22:36.387726    2720 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/test-preload-452000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/test-preload-452000/disk.qcow2
	I0906 12:22:36.387737    2720 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:22:36.387781    2720 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/test-preload-452000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/test-preload-452000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/test-preload-452000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:c4:05:6e:a4:51 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/test-preload-452000/disk.qcow2
	I0906 12:22:36.389260    2720 main.go:141] libmachine: STDOUT: 
	I0906 12:22:36.389273    2720 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:22:36.389289    2720 client.go:171] LocalClient.Create took 278.116708ms
	I0906 12:22:37.497714    2720 cache.go:157] /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0906 12:22:37.497787    2720 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 8.7099415s
	I0906 12:22:37.497825    2720 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0906 12:22:37.497891    2720 cache.go:87] Successfully saved all images to host disk.
	I0906 12:22:38.391445    2720 start.go:128] duration metric: createHost completed in 2.337105042s
	I0906 12:22:38.391524    2720 start.go:83] releasing machines lock for "test-preload-452000", held for 2.337610709s
	W0906 12:22:38.391882    2720 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-452000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:22:38.402319    2720 out.go:177] 
	W0906 12:22:38.406385    2720 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:22:38.406412    2720 out.go:239] * 
	W0906 12:22:38.409159    2720 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:22:38.417344    2720 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-452000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:522: *** TestPreload FAILED at 2023-09-06 12:22:38.435436 -0700 PDT m=+797.698998793
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-452000 -n test-preload-452000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-452000 -n test-preload-452000: exit status 7 (66.153459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-452000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-452000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-452000
--- FAIL: TestPreload (9.92s)

TestScheduledStopUnix (9.93s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-601000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-601000 --memory=2048 --driver=qemu2 : exit status 80 (9.759867125s)

-- stdout --
	* [scheduled-stop-601000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17116
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node scheduled-stop-601000 in cluster scheduled-stop-601000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-601000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-601000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-601000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17116
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node scheduled-stop-601000 in cluster scheduled-stop-601000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-601000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-601000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:522: *** TestScheduledStopUnix FAILED at 2023-09-06 12:22:48.358063 -0700 PDT m=+807.621893626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-601000 -n scheduled-stop-601000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-601000 -n scheduled-stop-601000: exit status 7 (68.893541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-601000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-601000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-601000
--- FAIL: TestScheduledStopUnix (9.93s)

TestSkaffold (11.81s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe1176805869 version
skaffold_test.go:63: skaffold version: v2.7.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-969000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-969000 --memory=2600 --driver=qemu2 : exit status 80 (9.635982875s)

-- stdout --
	* [skaffold-969000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17116
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node skaffold-969000 in cluster skaffold-969000
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-969000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-969000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-969000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17116
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node skaffold-969000 in cluster skaffold-969000
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-969000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-969000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:522: *** TestSkaffold FAILED at 2023-09-06 12:23:00.180473 -0700 PDT m=+819.444622626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-969000 -n skaffold-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-969000 -n skaffold-969000: exit status 7 (62.641167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-969000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-969000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-969000
--- FAIL: TestSkaffold (11.81s)

TestRunningBinaryUpgrade (126.43s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:106: v1.6.2 release installation failed: bad response code: 404
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-09-06 12:25:46.444688 -0700 PDT m=+985.713322084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-477000 -n running-upgrade-477000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-477000 -n running-upgrade-477000: exit status 85 (85.258708ms)

-- stdout --
	* Profile "running-upgrade-477000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p running-upgrade-477000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "running-upgrade-477000" host is not running, skipping log retrieval (state="* Profile \"running-upgrade-477000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p running-upgrade-477000\"")
helpers_test.go:175: Cleaning up "running-upgrade-477000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-477000
--- FAIL: TestRunningBinaryUpgrade (126.43s)

TestKubernetesUpgrade (15.19s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-268000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:234: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-268000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.665573166s)

-- stdout --
	* [kubernetes-upgrade-268000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17116
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kubernetes-upgrade-268000 in cluster kubernetes-upgrade-268000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-268000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 12:25:46.794757    3206 out.go:296] Setting OutFile to fd 1 ...
	I0906 12:25:46.794865    3206 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:25:46.794869    3206 out.go:309] Setting ErrFile to fd 2...
	I0906 12:25:46.794872    3206 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:25:46.794977    3206 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17116-1006/.minikube/bin
	I0906 12:25:46.796029    3206 out.go:303] Setting JSON to false
	I0906 12:25:46.811220    3206 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1520,"bootTime":1694026826,"procs":398,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:25:46.811282    3206 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 12:25:46.816366    3206 out.go:177] * [kubernetes-upgrade-268000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 12:25:46.823545    3206 out.go:177]   - MINIKUBE_LOCATION=17116
	I0906 12:25:46.823605    3206 notify.go:220] Checking for updates...
	I0906 12:25:46.826488    3206 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	I0906 12:25:46.829577    3206 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:25:46.832506    3206 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:25:46.833869    3206 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	I0906 12:25:46.836549    3206 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:25:46.839815    3206 config.go:182] Loaded profile config "cert-expiration-096000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 12:25:46.839882    3206 config.go:182] Loaded profile config "multinode-122000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 12:25:46.839918    3206 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 12:25:46.844334    3206 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 12:25:46.851501    3206 start.go:298] selected driver: qemu2
	I0906 12:25:46.851506    3206 start.go:902] validating driver "qemu2" against <nil>
	I0906 12:25:46.851512    3206 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:25:46.853308    3206 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0906 12:25:46.856547    3206 out.go:177] * Automatically selected the socket_vmnet network
	I0906 12:25:46.859549    3206 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0906 12:25:46.859572    3206 cni.go:84] Creating CNI manager for ""
	I0906 12:25:46.859587    3206 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0906 12:25:46.859590    3206 start_flags.go:321] config:
	{Name:kubernetes-upgrade-268000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-268000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 12:25:46.863594    3206 iso.go:125] acquiring lock: {Name:mk3f7665f27187d07892ba5c4ce3c49cda04e887 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:25:46.869430    3206 out.go:177] * Starting control plane node kubernetes-upgrade-268000 in cluster kubernetes-upgrade-268000
	I0906 12:25:46.873461    3206 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0906 12:25:46.873480    3206 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0906 12:25:46.873492    3206 cache.go:57] Caching tarball of preloaded images
	I0906 12:25:46.873551    3206 preload.go:174] Found /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 12:25:46.873556    3206 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0906 12:25:46.873840    3206 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/kubernetes-upgrade-268000/config.json ...
	I0906 12:25:46.873856    3206 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/kubernetes-upgrade-268000/config.json: {Name:mkd21a2f492bcd728d322eef1bb85b3c8c1f2b2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:25:46.874083    3206 start.go:365] acquiring machines lock for kubernetes-upgrade-268000: {Name:mkcd55fe6404821d1fd04819f324aec293fbf60b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:25:46.874115    3206 start.go:369] acquired machines lock for "kubernetes-upgrade-268000" in 23µs
	I0906 12:25:46.874126    3206 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-268000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-268000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:25:46.874173    3206 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:25:46.878520    3206 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0906 12:25:46.894477    3206 start.go:159] libmachine.API.Create for "kubernetes-upgrade-268000" (driver="qemu2")
	I0906 12:25:46.894501    3206 client.go:168] LocalClient.Create starting
	I0906 12:25:46.894559    3206 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem
	I0906 12:25:46.894584    3206 main.go:141] libmachine: Decoding PEM data...
	I0906 12:25:46.894592    3206 main.go:141] libmachine: Parsing certificate...
	I0906 12:25:46.894630    3206 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/cert.pem
	I0906 12:25:46.894648    3206 main.go:141] libmachine: Decoding PEM data...
	I0906 12:25:46.894654    3206 main.go:141] libmachine: Parsing certificate...
	I0906 12:25:46.894955    3206 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17116-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 12:25:47.015723    3206 main.go:141] libmachine: Creating SSH key...
	I0906 12:25:47.052028    3206 main.go:141] libmachine: Creating Disk image...
	I0906 12:25:47.052034    3206 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:25:47.052169    3206 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/kubernetes-upgrade-268000/disk.qcow2.raw /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/kubernetes-upgrade-268000/disk.qcow2
	I0906 12:25:47.060594    3206 main.go:141] libmachine: STDOUT: 
	I0906 12:25:47.060607    3206 main.go:141] libmachine: STDERR: 
	I0906 12:25:47.060649    3206 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/kubernetes-upgrade-268000/disk.qcow2 +20000M
	I0906 12:25:47.067742    3206 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:25:47.067754    3206 main.go:141] libmachine: STDERR: 
	I0906 12:25:47.067766    3206 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/kubernetes-upgrade-268000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/kubernetes-upgrade-268000/disk.qcow2
	I0906 12:25:47.067771    3206 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:25:47.067804    3206 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/kubernetes-upgrade-268000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/kubernetes-upgrade-268000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/kubernetes-upgrade-268000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:d0:cf:3f:09:22 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/kubernetes-upgrade-268000/disk.qcow2
	I0906 12:25:47.069284    3206 main.go:141] libmachine: STDOUT: 
	I0906 12:25:47.069310    3206 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:25:47.069329    3206 client.go:171] LocalClient.Create took 174.827833ms
	I0906 12:25:49.071446    3206 start.go:128] duration metric: createHost completed in 2.197311834s
	I0906 12:25:49.071502    3206 start.go:83] releasing machines lock for "kubernetes-upgrade-268000", held for 2.1974375s
	W0906 12:25:49.071583    3206 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:25:49.079926    3206 out.go:177] * Deleting "kubernetes-upgrade-268000" in qemu2 ...
	W0906 12:25:49.099953    3206 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:25:49.099985    3206 start.go:687] Will try again in 5 seconds ...
	I0906 12:25:54.102150    3206 start.go:365] acquiring machines lock for kubernetes-upgrade-268000: {Name:mkcd55fe6404821d1fd04819f324aec293fbf60b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:25:54.102609    3206 start.go:369] acquired machines lock for "kubernetes-upgrade-268000" in 349.458µs
	I0906 12:25:54.102709    3206 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-268000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-268000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:25:54.103025    3206 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:25:54.113946    3206 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0906 12:25:54.161509    3206 start.go:159] libmachine.API.Create for "kubernetes-upgrade-268000" (driver="qemu2")
	I0906 12:25:54.161553    3206 client.go:168] LocalClient.Create starting
	I0906 12:25:54.161746    3206 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem
	I0906 12:25:54.161818    3206 main.go:141] libmachine: Decoding PEM data...
	I0906 12:25:54.161841    3206 main.go:141] libmachine: Parsing certificate...
	I0906 12:25:54.161931    3206 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/cert.pem
	I0906 12:25:54.161969    3206 main.go:141] libmachine: Decoding PEM data...
	I0906 12:25:54.161987    3206 main.go:141] libmachine: Parsing certificate...
	I0906 12:25:54.162561    3206 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17116-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 12:25:54.294210    3206 main.go:141] libmachine: Creating SSH key...
	I0906 12:25:54.374758    3206 main.go:141] libmachine: Creating Disk image...
	I0906 12:25:54.374766    3206 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:25:54.374915    3206 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/kubernetes-upgrade-268000/disk.qcow2.raw /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/kubernetes-upgrade-268000/disk.qcow2
	I0906 12:25:54.383604    3206 main.go:141] libmachine: STDOUT: 
	I0906 12:25:54.383617    3206 main.go:141] libmachine: STDERR: 
	I0906 12:25:54.383679    3206 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/kubernetes-upgrade-268000/disk.qcow2 +20000M
	I0906 12:25:54.390874    3206 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:25:54.390895    3206 main.go:141] libmachine: STDERR: 
	I0906 12:25:54.390912    3206 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/kubernetes-upgrade-268000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/kubernetes-upgrade-268000/disk.qcow2
	I0906 12:25:54.390920    3206 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:25:54.390957    3206 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/kubernetes-upgrade-268000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/kubernetes-upgrade-268000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/kubernetes-upgrade-268000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:94:f4:50:0d:81 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/kubernetes-upgrade-268000/disk.qcow2
	I0906 12:25:54.392525    3206 main.go:141] libmachine: STDOUT: 
	I0906 12:25:54.392537    3206 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:25:54.392548    3206 client.go:171] LocalClient.Create took 230.997416ms
	I0906 12:25:56.394649    3206 start.go:128] duration metric: createHost completed in 2.291663625s
	I0906 12:25:56.394709    3206 start.go:83] releasing machines lock for "kubernetes-upgrade-268000", held for 2.2921405s
	W0906 12:25:56.395158    3206 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-268000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-268000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:25:56.403778    3206 out.go:177] 
	W0906 12:25:56.407713    3206 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:25:56.407752    3206 out.go:239] * 
	* 
	W0906 12:25:56.410251    3206 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:25:56.419629    3206 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:236: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-268000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-268000
version_upgrade_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-268000 status --format={{.Host}}
version_upgrade_test.go:244: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-268000 status --format={{.Host}}: exit status 7 (36.151792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:246: status error: exit status 7 (may be ok)
version_upgrade_test.go:255: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-268000 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:255: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-268000 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.178149333s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-268000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17116
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node kubernetes-upgrade-268000 in cluster kubernetes-upgrade-268000
	* Restarting existing qemu2 VM for "kubernetes-upgrade-268000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-268000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0906 12:25:56.599711    3224 out.go:296] Setting OutFile to fd 1 ...
	I0906 12:25:56.599854    3224 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:25:56.599857    3224 out.go:309] Setting ErrFile to fd 2...
	I0906 12:25:56.599860    3224 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:25:56.599960    3224 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17116-1006/.minikube/bin
	I0906 12:25:56.600935    3224 out.go:303] Setting JSON to false
	I0906 12:25:56.616098    3224 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1530,"bootTime":1694026826,"procs":398,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:25:56.616154    3224 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 12:25:56.621285    3224 out.go:177] * [kubernetes-upgrade-268000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 12:25:56.628245    3224 out.go:177]   - MINIKUBE_LOCATION=17116
	I0906 12:25:56.632231    3224 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	I0906 12:25:56.628304    3224 notify.go:220] Checking for updates...
	I0906 12:25:56.636252    3224 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:25:56.639296    3224 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:25:56.642215    3224 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	I0906 12:25:56.645213    3224 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:25:56.648501    3224 config.go:182] Loaded profile config "kubernetes-upgrade-268000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0906 12:25:56.648756    3224 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 12:25:56.653259    3224 out.go:177] * Using the qemu2 driver based on existing profile
	I0906 12:25:56.660234    3224 start.go:298] selected driver: qemu2
	I0906 12:25:56.660242    3224 start.go:902] validating driver "qemu2" against &{Name:kubernetes-upgrade-268000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-268000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 12:25:56.660327    3224 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:25:56.662766    3224 cni.go:84] Creating CNI manager for ""
	I0906 12:25:56.662781    3224 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 12:25:56.662785    3224 start_flags.go:321] config:
	{Name:kubernetes-upgrade-268000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:kubernetes-upgrade-268000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 12:25:56.666857    3224 iso.go:125] acquiring lock: {Name:mk3f7665f27187d07892ba5c4ce3c49cda04e887 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:25:56.674215    3224 out.go:177] * Starting control plane node kubernetes-upgrade-268000 in cluster kubernetes-upgrade-268000
	I0906 12:25:56.678248    3224 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 12:25:56.678269    3224 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0906 12:25:56.678287    3224 cache.go:57] Caching tarball of preloaded images
	I0906 12:25:56.678341    3224 preload.go:174] Found /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 12:25:56.678347    3224 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0906 12:25:56.678412    3224 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/kubernetes-upgrade-268000/config.json ...
	I0906 12:25:56.678773    3224 start.go:365] acquiring machines lock for kubernetes-upgrade-268000: {Name:mkcd55fe6404821d1fd04819f324aec293fbf60b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:25:56.678800    3224 start.go:369] acquired machines lock for "kubernetes-upgrade-268000" in 20.875µs
	I0906 12:25:56.678810    3224 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:25:56.678815    3224 fix.go:54] fixHost starting: 
	I0906 12:25:56.678930    3224 fix.go:102] recreateIfNeeded on kubernetes-upgrade-268000: state=Stopped err=<nil>
	W0906 12:25:56.678939    3224 fix.go:128] unexpected machine state, will restart: <nil>
	I0906 12:25:56.687248    3224 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-268000" ...
	I0906 12:25:56.690314    3224 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/kubernetes-upgrade-268000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/kubernetes-upgrade-268000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/kubernetes-upgrade-268000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:94:f4:50:0d:81 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/kubernetes-upgrade-268000/disk.qcow2
	I0906 12:25:56.692218    3224 main.go:141] libmachine: STDOUT: 
	I0906 12:25:56.692235    3224 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:25:56.692263    3224 fix.go:56] fixHost completed within 13.448791ms
	I0906 12:25:56.692268    3224 start.go:83] releasing machines lock for "kubernetes-upgrade-268000", held for 13.46525ms
	W0906 12:25:56.692275    3224 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:25:56.692322    3224 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:25:56.692327    3224 start.go:687] Will try again in 5 seconds ...
	I0906 12:26:01.694336    3224 start.go:365] acquiring machines lock for kubernetes-upgrade-268000: {Name:mkcd55fe6404821d1fd04819f324aec293fbf60b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:26:01.694786    3224 start.go:369] acquired machines lock for "kubernetes-upgrade-268000" in 344.667µs
	I0906 12:26:01.695000    3224 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:26:01.695023    3224 fix.go:54] fixHost starting: 
	I0906 12:26:01.695823    3224 fix.go:102] recreateIfNeeded on kubernetes-upgrade-268000: state=Stopped err=<nil>
	W0906 12:26:01.695850    3224 fix.go:128] unexpected machine state, will restart: <nil>
	I0906 12:26:01.700376    3224 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-268000" ...
	I0906 12:26:01.707436    3224 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/kubernetes-upgrade-268000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/kubernetes-upgrade-268000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/kubernetes-upgrade-268000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:94:f4:50:0d:81 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/kubernetes-upgrade-268000/disk.qcow2
	I0906 12:26:01.716697    3224 main.go:141] libmachine: STDOUT: 
	I0906 12:26:01.716762    3224 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:26:01.716867    3224 fix.go:56] fixHost completed within 21.844042ms
	I0906 12:26:01.716894    3224 start.go:83] releasing machines lock for "kubernetes-upgrade-268000", held for 22.086083ms
	W0906 12:26:01.717109    3224 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-268000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-268000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:26:01.724289    3224 out.go:177] 
	W0906 12:26:01.728425    3224 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:26:01.728477    3224 out.go:239] * 
	* 
	W0906 12:26:01.730855    3224 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:26:01.738308    3224 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:257: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-268000 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:260: (dbg) Run:  kubectl --context kubernetes-upgrade-268000 version --output=json
version_upgrade_test.go:260: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-268000 version --output=json: exit status 1 (64.659541ms)

                                                
                                                
** stderr ** 
	error: context "kubernetes-upgrade-268000" does not exist

                                                
                                                
** /stderr **
version_upgrade_test.go:262: error running kubectl: exit status 1
panic.go:522: *** TestKubernetesUpgrade FAILED at 2023-09-06 12:26:01.816921 -0700 PDT m=+1001.085970209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-268000 -n kubernetes-upgrade-268000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-268000 -n kubernetes-upgrade-268000: exit status 7 (33.328ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-268000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-268000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-268000
--- FAIL: TestKubernetesUpgrade (15.19s)
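Nearly every failure above shares one error string: a refused connection to `/var/run/socket_vmnet`, meaning the socket_vmnet daemon was not reachable on the CI host. As a sketch of how such runs could be triaged automatically, the helper below (an assumed script, not part of the test suite) classifies a captured log line by that signature:

```shell
# Classify a minikube failure line by its root cause.
# The sample line is copied from the TestKubernetesUpgrade log above.
log='! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1'

case "$log" in
  *'Failed to connect to "/var/run/socket_vmnet"'*)
    # socket_vmnet daemon down or socket path wrong on the host
    echo "root cause: socket_vmnet not reachable" ;;
  *'Connection refused'*)
    echo "root cause: other refused connection" ;;
  *)
    echo "root cause: unknown" ;;
esac
```

On this report's logs the first pattern matches, which is consistent with every qemu2 test failing in the same few seconds regardless of the test body.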

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.43s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.31.2 on darwin (arm64)
- MINIKUBE_LOCATION=17116
- KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current4180939942/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.43s)
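The DRV_UNSUPPORTED_OS exit above is expected rather than a regression: hyperkit is an Intel-only hypervisor, so a darwin/arm64 host can never run it. A guard along these lines (an assumed sketch, not minikube's own check) would let the job skip the driver up front instead of failing; the values are hardcoded to mirror this CI host:

```shell
# Host platform for this report's CI agent (would normally come from
# `uname -s` / `uname -m`); hardcoded here for a deterministic sketch.
os=darwin
arch=arm64

# hyperkit requires darwin/amd64, so skip it on Apple Silicon.
if [ "$os" = darwin ] && [ "$arch" = arm64 ]; then
  echo "skipping hyperkit: unsupported on $os/$arch"
else
  echo "hyperkit candidate on $os/$arch"
fi
```

The same reasoning applies to the `upgrade-v1.2.0-to-current` subtest below, which fails with the identical exit status 56.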

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.09s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.31.2 on darwin (arm64)
- MINIKUBE_LOCATION=17116
- KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current580016252/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.09s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (115.72s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
version_upgrade_test.go:167: v1.6.2 release installation failed: bad response code: 404
--- FAIL: TestStoppedBinaryUpgrade/Setup (115.72s)

                                                
                                    
TestPause/serial/Start (9.79s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-784000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-784000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.725037167s)

                                                
                                                
-- stdout --
	* [pause-784000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17116
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node pause-784000 in cluster pause-784000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-784000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-784000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-784000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-784000 -n pause-784000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-784000 -n pause-784000: exit status 7 (67.999125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-784000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.79s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (9.8s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-305000 --driver=qemu2 
E0906 12:26:50.802039    1421 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/ingress-addon-legacy-192000/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-305000 --driver=qemu2 : exit status 80 (9.724504458s)

                                                
                                                
-- stdout --
	* [NoKubernetes-305000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17116
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node NoKubernetes-305000 in cluster NoKubernetes-305000
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-305000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-305000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-305000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-305000 -n NoKubernetes-305000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-305000 -n NoKubernetes-305000: exit status 7 (70.051334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-305000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.80s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (5.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-305000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-305000 --no-kubernetes --driver=qemu2 : exit status 80 (5.252663875s)

                                                
                                                
-- stdout --
	* [NoKubernetes-305000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17116
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-305000
	* Restarting existing qemu2 VM for "NoKubernetes-305000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-305000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-305000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-305000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-305000 -n NoKubernetes-305000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-305000 -n NoKubernetes-305000: exit status 7 (70.519917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-305000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.32s)

                                                
                                    
TestNoKubernetes/serial/Start (5.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-305000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-305000 --no-kubernetes --driver=qemu2 : exit status 80 (5.24019425s)

                                                
                                                
-- stdout --
	* [NoKubernetes-305000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17116
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-305000
	* Restarting existing qemu2 VM for "NoKubernetes-305000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-305000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-305000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-305000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-305000 -n NoKubernetes-305000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-305000 -n NoKubernetes-305000: exit status 7 (61.456875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-305000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.30s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (5.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-305000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-305000 --driver=qemu2 : exit status 80 (5.241032583s)

                                                
                                                
-- stdout --
	* [NoKubernetes-305000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17116
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-305000
	* Restarting existing qemu2 VM for "NoKubernetes-305000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-305000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-305000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-305000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-305000 -n NoKubernetes-305000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-305000 -n NoKubernetes-305000: exit status 7 (68.692375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-305000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.31s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (9.69s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-330000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
E0906 12:27:18.511050    1421 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/ingress-addon-legacy-192000/client.crt: no such file or directory
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-330000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.684696584s)

                                                
                                                
-- stdout --
	* [auto-330000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17116
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node auto-330000 in cluster auto-330000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-330000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0906 12:27:16.594729    3349 out.go:296] Setting OutFile to fd 1 ...
	I0906 12:27:16.594834    3349 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:27:16.594838    3349 out.go:309] Setting ErrFile to fd 2...
	I0906 12:27:16.594840    3349 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:27:16.594956    3349 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17116-1006/.minikube/bin
	I0906 12:27:16.595921    3349 out.go:303] Setting JSON to false
	I0906 12:27:16.610801    3349 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1610,"bootTime":1694026826,"procs":399,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:27:16.610869    3349 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 12:27:16.615629    3349 out.go:177] * [auto-330000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 12:27:16.622581    3349 out.go:177]   - MINIKUBE_LOCATION=17116
	I0906 12:27:16.622639    3349 notify.go:220] Checking for updates...
	I0906 12:27:16.626638    3349 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	I0906 12:27:16.629641    3349 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:27:16.632602    3349 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:27:16.635573    3349 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	I0906 12:27:16.638612    3349 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:27:16.640326    3349 config.go:182] Loaded profile config "multinode-122000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 12:27:16.640362    3349 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 12:27:16.644532    3349 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 12:27:16.651367    3349 start.go:298] selected driver: qemu2
	I0906 12:27:16.651371    3349 start.go:902] validating driver "qemu2" against <nil>
	I0906 12:27:16.651376    3349 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:27:16.653263    3349 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0906 12:27:16.656637    3349 out.go:177] * Automatically selected the socket_vmnet network
	I0906 12:27:16.659652    3349 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:27:16.659669    3349 cni.go:84] Creating CNI manager for ""
	I0906 12:27:16.659676    3349 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 12:27:16.659680    3349 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 12:27:16.659684    3349 start_flags.go:321] config:
	{Name:auto-330000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:auto-330000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISoc
ket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0
AutoPauseInterval:1m0s}
	I0906 12:27:16.663744    3349 iso.go:125] acquiring lock: {Name:mk3f7665f27187d07892ba5c4ce3c49cda04e887 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:27:16.670604    3349 out.go:177] * Starting control plane node auto-330000 in cluster auto-330000
	I0906 12:27:16.674534    3349 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 12:27:16.674562    3349 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0906 12:27:16.674584    3349 cache.go:57] Caching tarball of preloaded images
	I0906 12:27:16.674668    3349 preload.go:174] Found /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 12:27:16.674676    3349 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0906 12:27:16.674742    3349 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/auto-330000/config.json ...
	I0906 12:27:16.674755    3349 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/auto-330000/config.json: {Name:mk26a163a672b1c5240e53a1726227253c37ac35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:27:16.674972    3349 start.go:365] acquiring machines lock for auto-330000: {Name:mkcd55fe6404821d1fd04819f324aec293fbf60b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:27:16.675001    3349 start.go:369] acquired machines lock for "auto-330000" in 23.5µs
	I0906 12:27:16.675013    3349 start.go:93] Provisioning new machine with config: &{Name:auto-330000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.28.1 ClusterName:auto-330000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:27:16.675054    3349 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:27:16.683642    3349 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0906 12:27:16.699773    3349 start.go:159] libmachine.API.Create for "auto-330000" (driver="qemu2")
	I0906 12:27:16.699793    3349 client.go:168] LocalClient.Create starting
	I0906 12:27:16.699870    3349 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem
	I0906 12:27:16.699896    3349 main.go:141] libmachine: Decoding PEM data...
	I0906 12:27:16.699909    3349 main.go:141] libmachine: Parsing certificate...
	I0906 12:27:16.699951    3349 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/cert.pem
	I0906 12:27:16.699975    3349 main.go:141] libmachine: Decoding PEM data...
	I0906 12:27:16.699988    3349 main.go:141] libmachine: Parsing certificate...
	I0906 12:27:16.700318    3349 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17116-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 12:27:16.817278    3349 main.go:141] libmachine: Creating SSH key...
	I0906 12:27:16.848649    3349 main.go:141] libmachine: Creating Disk image...
	I0906 12:27:16.848654    3349 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:27:16.848789    3349 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/auto-330000/disk.qcow2.raw /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/auto-330000/disk.qcow2
	I0906 12:27:16.857141    3349 main.go:141] libmachine: STDOUT: 
	I0906 12:27:16.857158    3349 main.go:141] libmachine: STDERR: 
	I0906 12:27:16.857212    3349 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/auto-330000/disk.qcow2 +20000M
	I0906 12:27:16.864417    3349 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:27:16.864440    3349 main.go:141] libmachine: STDERR: 
	I0906 12:27:16.864458    3349 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/auto-330000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/auto-330000/disk.qcow2
	I0906 12:27:16.864464    3349 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:27:16.864499    3349 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/auto-330000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/auto-330000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/auto-330000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:1d:1d:9d:65:1b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/auto-330000/disk.qcow2
	I0906 12:27:16.866036    3349 main.go:141] libmachine: STDOUT: 
	I0906 12:27:16.866047    3349 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:27:16.866067    3349 client.go:171] LocalClient.Create took 166.270625ms
	I0906 12:27:18.868169    3349 start.go:128] duration metric: createHost completed in 2.193157791s
	I0906 12:27:18.868468    3349 start.go:83] releasing machines lock for "auto-330000", held for 2.193511208s
	W0906 12:27:18.868537    3349 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:27:18.878746    3349 out.go:177] * Deleting "auto-330000" in qemu2 ...
	W0906 12:27:18.899076    3349 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:27:18.899108    3349 start.go:687] Will try again in 5 seconds ...
	I0906 12:27:23.901257    3349 start.go:365] acquiring machines lock for auto-330000: {Name:mkcd55fe6404821d1fd04819f324aec293fbf60b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:27:23.901774    3349 start.go:369] acquired machines lock for "auto-330000" in 414.208µs
	I0906 12:27:23.901937    3349 start.go:93] Provisioning new machine with config: &{Name:auto-330000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.28.1 ClusterName:auto-330000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:27:23.902271    3349 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:27:23.911014    3349 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0906 12:27:23.958138    3349 start.go:159] libmachine.API.Create for "auto-330000" (driver="qemu2")
	I0906 12:27:23.958205    3349 client.go:168] LocalClient.Create starting
	I0906 12:27:23.958307    3349 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem
	I0906 12:27:23.958351    3349 main.go:141] libmachine: Decoding PEM data...
	I0906 12:27:23.958365    3349 main.go:141] libmachine: Parsing certificate...
	I0906 12:27:23.958439    3349 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/cert.pem
	I0906 12:27:23.958473    3349 main.go:141] libmachine: Decoding PEM data...
	I0906 12:27:23.958483    3349 main.go:141] libmachine: Parsing certificate...
	I0906 12:27:23.958981    3349 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17116-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 12:27:24.088531    3349 main.go:141] libmachine: Creating SSH key...
	I0906 12:27:24.191953    3349 main.go:141] libmachine: Creating Disk image...
	I0906 12:27:24.191959    3349 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:27:24.192103    3349 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/auto-330000/disk.qcow2.raw /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/auto-330000/disk.qcow2
	I0906 12:27:24.200753    3349 main.go:141] libmachine: STDOUT: 
	I0906 12:27:24.200768    3349 main.go:141] libmachine: STDERR: 
	I0906 12:27:24.200834    3349 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/auto-330000/disk.qcow2 +20000M
	I0906 12:27:24.207944    3349 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:27:24.207955    3349 main.go:141] libmachine: STDERR: 
	I0906 12:27:24.207966    3349 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/auto-330000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/auto-330000/disk.qcow2
	I0906 12:27:24.207971    3349 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:27:24.208006    3349 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/auto-330000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/auto-330000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/auto-330000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:11:9f:56:97:2f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/auto-330000/disk.qcow2
	I0906 12:27:24.209526    3349 main.go:141] libmachine: STDOUT: 
	I0906 12:27:24.209537    3349 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:27:24.209548    3349 client.go:171] LocalClient.Create took 251.346084ms
	I0906 12:27:26.211671    3349 start.go:128] duration metric: createHost completed in 2.309440125s
	I0906 12:27:26.211726    3349 start.go:83] releasing machines lock for "auto-330000", held for 2.309992125s
	W0906 12:27:26.212131    3349 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-330000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-330000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:27:26.220651    3349 out.go:177] 
	W0906 12:27:26.225813    3349 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:27:26.225840    3349 out.go:239] * 
	* 
	W0906 12:27:26.228472    3349 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:27:26.237667    3349 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.69s)
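Note: this failure (and the others in this run) reduces to the same root cause visible in the log above: nothing was listening on `/var/run/socket_vmnet`, so the `socket_vmnet_client` wrapper around `qemu-system-aarch64` got "Connection refused". A minimal pre-flight check is sketched below; the socket path comes from the log, but the Homebrew `brew services` restart command is an assumption about how the daemon is managed on this agent and should be verified against the local install.

```shell
#!/bin/sh
# Pre-flight check: verify the socket_vmnet control socket exists
# before launching the qemu2-driver test suite.
SOCK=/var/run/socket_vmnet

if [ -S "$SOCK" ]; then
  echo "socket_vmnet socket present: $SOCK"
else
  echo "socket_vmnet socket missing: $SOCK"
  # Assumed restart command for a Homebrew-managed daemon; adjust to the local setup:
  #   sudo brew services start socket_vmnet
fi
```

Running this on the agent before the suite would distinguish an environment problem (daemon down) from a genuine minikube regression.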

TestNetworkPlugins/group/kindnet/Start (9.85s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-330000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-330000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.8493935s)

-- stdout --
	* [kindnet-330000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17116
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kindnet-330000 in cluster kindnet-330000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-330000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 12:27:28.350130    3462 out.go:296] Setting OutFile to fd 1 ...
	I0906 12:27:28.350270    3462 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:27:28.350273    3462 out.go:309] Setting ErrFile to fd 2...
	I0906 12:27:28.350275    3462 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:27:28.350385    3462 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17116-1006/.minikube/bin
	I0906 12:27:28.351393    3462 out.go:303] Setting JSON to false
	I0906 12:27:28.366343    3462 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1622,"bootTime":1694026826,"procs":402,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:27:28.366420    3462 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 12:27:28.371972    3462 out.go:177] * [kindnet-330000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 12:27:28.379846    3462 out.go:177]   - MINIKUBE_LOCATION=17116
	I0906 12:27:28.375799    3462 notify.go:220] Checking for updates...
	I0906 12:27:28.387897    3462 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	I0906 12:27:28.390926    3462 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:27:28.393911    3462 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:27:28.396945    3462 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	I0906 12:27:28.399876    3462 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:27:28.403270    3462 config.go:182] Loaded profile config "multinode-122000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 12:27:28.403307    3462 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 12:27:28.407815    3462 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 12:27:28.414867    3462 start.go:298] selected driver: qemu2
	I0906 12:27:28.414874    3462 start.go:902] validating driver "qemu2" against <nil>
	I0906 12:27:28.414882    3462 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:27:28.416868    3462 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0906 12:27:28.419847    3462 out.go:177] * Automatically selected the socket_vmnet network
	I0906 12:27:28.422897    3462 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:27:28.422915    3462 cni.go:84] Creating CNI manager for "kindnet"
	I0906 12:27:28.422925    3462 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0906 12:27:28.422932    3462 start_flags.go:321] config:
	{Name:kindnet-330000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:kindnet-330000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: S
SHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 12:27:28.426963    3462 iso.go:125] acquiring lock: {Name:mk3f7665f27187d07892ba5c4ce3c49cda04e887 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:27:28.431866    3462 out.go:177] * Starting control plane node kindnet-330000 in cluster kindnet-330000
	I0906 12:27:28.435895    3462 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 12:27:28.435928    3462 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0906 12:27:28.435947    3462 cache.go:57] Caching tarball of preloaded images
	I0906 12:27:28.436036    3462 preload.go:174] Found /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 12:27:28.436042    3462 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0906 12:27:28.436111    3462 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/kindnet-330000/config.json ...
	I0906 12:27:28.436123    3462 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/kindnet-330000/config.json: {Name:mk28089ee846c88b985cbd308fe9d217cead68d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:27:28.436327    3462 start.go:365] acquiring machines lock for kindnet-330000: {Name:mkcd55fe6404821d1fd04819f324aec293fbf60b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:27:28.436356    3462 start.go:369] acquired machines lock for "kindnet-330000" in 23.917µs
	I0906 12:27:28.436367    3462 start.go:93] Provisioning new machine with config: &{Name:kindnet-330000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:kindnet-330000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:27:28.436398    3462 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:27:28.444855    3462 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0906 12:27:28.459950    3462 start.go:159] libmachine.API.Create for "kindnet-330000" (driver="qemu2")
	I0906 12:27:28.459971    3462 client.go:168] LocalClient.Create starting
	I0906 12:27:28.460041    3462 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem
	I0906 12:27:28.460067    3462 main.go:141] libmachine: Decoding PEM data...
	I0906 12:27:28.460079    3462 main.go:141] libmachine: Parsing certificate...
	I0906 12:27:28.460117    3462 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/cert.pem
	I0906 12:27:28.460138    3462 main.go:141] libmachine: Decoding PEM data...
	I0906 12:27:28.460148    3462 main.go:141] libmachine: Parsing certificate...
	I0906 12:27:28.460459    3462 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17116-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 12:27:28.574933    3462 main.go:141] libmachine: Creating SSH key...
	I0906 12:27:28.705651    3462 main.go:141] libmachine: Creating Disk image...
	I0906 12:27:28.705659    3462 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:27:28.705800    3462 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/kindnet-330000/disk.qcow2.raw /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/kindnet-330000/disk.qcow2
	I0906 12:27:28.714771    3462 main.go:141] libmachine: STDOUT: 
	I0906 12:27:28.714782    3462 main.go:141] libmachine: STDERR: 
	I0906 12:27:28.714841    3462 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/kindnet-330000/disk.qcow2 +20000M
	I0906 12:27:28.722006    3462 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:27:28.722017    3462 main.go:141] libmachine: STDERR: 
	I0906 12:27:28.722033    3462 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/kindnet-330000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/kindnet-330000/disk.qcow2
	I0906 12:27:28.722044    3462 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:27:28.722092    3462 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/kindnet-330000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/kindnet-330000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/kindnet-330000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:92:fb:ac:4b:31 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/kindnet-330000/disk.qcow2
	I0906 12:27:28.723659    3462 main.go:141] libmachine: STDOUT: 
	I0906 12:27:28.723671    3462 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:27:28.723691    3462 client.go:171] LocalClient.Create took 263.721333ms
	I0906 12:27:30.725829    3462 start.go:128] duration metric: createHost completed in 2.289472917s
	I0906 12:27:30.725898    3462 start.go:83] releasing machines lock for "kindnet-330000", held for 2.289594958s
	W0906 12:27:30.725964    3462 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:27:30.736226    3462 out.go:177] * Deleting "kindnet-330000" in qemu2 ...
	W0906 12:27:30.758867    3462 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:27:30.758899    3462 start.go:687] Will try again in 5 seconds ...
	I0906 12:27:35.761052    3462 start.go:365] acquiring machines lock for kindnet-330000: {Name:mkcd55fe6404821d1fd04819f324aec293fbf60b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:27:35.761609    3462 start.go:369] acquired machines lock for "kindnet-330000" in 428.292µs
	I0906 12:27:35.761746    3462 start.go:93] Provisioning new machine with config: &{Name:kindnet-330000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:kindnet-330000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:27:35.762093    3462 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:27:35.770781    3462 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0906 12:27:35.818618    3462 start.go:159] libmachine.API.Create for "kindnet-330000" (driver="qemu2")
	I0906 12:27:35.818654    3462 client.go:168] LocalClient.Create starting
	I0906 12:27:35.818765    3462 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem
	I0906 12:27:35.818817    3462 main.go:141] libmachine: Decoding PEM data...
	I0906 12:27:35.818838    3462 main.go:141] libmachine: Parsing certificate...
	I0906 12:27:35.818910    3462 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/cert.pem
	I0906 12:27:35.818945    3462 main.go:141] libmachine: Decoding PEM data...
	I0906 12:27:35.818958    3462 main.go:141] libmachine: Parsing certificate...
	I0906 12:27:35.819478    3462 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17116-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 12:27:35.949509    3462 main.go:141] libmachine: Creating SSH key...
	I0906 12:27:36.108912    3462 main.go:141] libmachine: Creating Disk image...
	I0906 12:27:36.108920    3462 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:27:36.109091    3462 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/kindnet-330000/disk.qcow2.raw /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/kindnet-330000/disk.qcow2
	I0906 12:27:36.118131    3462 main.go:141] libmachine: STDOUT: 
	I0906 12:27:36.118147    3462 main.go:141] libmachine: STDERR: 
	I0906 12:27:36.118226    3462 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/kindnet-330000/disk.qcow2 +20000M
	I0906 12:27:36.125472    3462 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:27:36.125483    3462 main.go:141] libmachine: STDERR: 
	I0906 12:27:36.125503    3462 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/kindnet-330000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/kindnet-330000/disk.qcow2
	I0906 12:27:36.125511    3462 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:27:36.125548    3462 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/kindnet-330000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/kindnet-330000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/kindnet-330000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:75:82:14:1b:3e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/kindnet-330000/disk.qcow2
	I0906 12:27:36.127069    3462 main.go:141] libmachine: STDOUT: 
	I0906 12:27:36.127089    3462 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:27:36.127102    3462 client.go:171] LocalClient.Create took 308.452125ms
	I0906 12:27:38.129233    3462 start.go:128] duration metric: createHost completed in 2.367160792s
	I0906 12:27:38.129326    3462 start.go:83] releasing machines lock for "kindnet-330000", held for 2.367725792s
	W0906 12:27:38.129709    3462 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-330000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-330000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:27:38.140110    3462 out.go:177] 
	W0906 12:27:38.144421    3462 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:27:38.144447    3462 out.go:239] * 
	* 
	W0906 12:27:38.147373    3462 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:27:38.157269    3462 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.85s)
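Every failing start in this group exits with status 80 off the same STDERR line: `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. `socket_vmnet_client` cannot reach the socket_vmnet daemon on the CI host. A minimal local sketch of the first thing to check is below; `probe_socket` is an illustrative helper of our own (not part of minikube or socket_vmnet), and the socket path is the one from the log:

```shell
# probe_socket reports whether a Unix-domain socket exists at the given
# path. Note: [ -S ] only tests that the filesystem entry is a socket;
# "Connection refused" can also occur when the socket file exists but no
# daemon is accepting on it, so a "present" result is necessary but not
# sufficient.
probe_socket() {
  if [ -S "$1" ]; then
    echo "present"
  else
    echo "missing"
  fi
}

# Path taken from the failing qemu-system-aarch64 invocation above.
probe_socket /var/run/socket_vmnet
```

If the probe reports "missing", the socket_vmnet daemon (normally run as a root launchd service on macOS) was never started or is writing its socket elsewhere, which matches the uniform `Connection refused` across kindnet, calico, and the other qemu2 starts in this run.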

                                                
                                    
TestNetworkPlugins/group/calico/Start (9.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-330000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-330000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.782224541s)

                                                
                                                
-- stdout --
	* [calico-330000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17116
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node calico-330000 in cluster calico-330000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-330000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0906 12:27:40.380074    3578 out.go:296] Setting OutFile to fd 1 ...
	I0906 12:27:40.380185    3578 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:27:40.380188    3578 out.go:309] Setting ErrFile to fd 2...
	I0906 12:27:40.380190    3578 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:27:40.380289    3578 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17116-1006/.minikube/bin
	I0906 12:27:40.381275    3578 out.go:303] Setting JSON to false
	I0906 12:27:40.396236    3578 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1634,"bootTime":1694026826,"procs":401,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:27:40.396316    3578 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 12:27:40.400552    3578 out.go:177] * [calico-330000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 12:27:40.408400    3578 out.go:177]   - MINIKUBE_LOCATION=17116
	I0906 12:27:40.412464    3578 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	I0906 12:27:40.408479    3578 notify.go:220] Checking for updates...
	I0906 12:27:40.416937    3578 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:27:40.420521    3578 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:27:40.423531    3578 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	I0906 12:27:40.426512    3578 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:27:40.430141    3578 config.go:182] Loaded profile config "multinode-122000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 12:27:40.430196    3578 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 12:27:40.434499    3578 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 12:27:40.441402    3578 start.go:298] selected driver: qemu2
	I0906 12:27:40.441407    3578 start.go:902] validating driver "qemu2" against <nil>
	I0906 12:27:40.441414    3578 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:27:40.443448    3578 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0906 12:27:40.446559    3578 out.go:177] * Automatically selected the socket_vmnet network
	I0906 12:27:40.449634    3578 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:27:40.449661    3578 cni.go:84] Creating CNI manager for "calico"
	I0906 12:27:40.449675    3578 start_flags.go:316] Found "Calico" CNI - setting NetworkPlugin=cni
	I0906 12:27:40.449681    3578 start_flags.go:321] config:
	{Name:calico-330000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:calico-330000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 12:27:40.454022    3578 iso.go:125] acquiring lock: {Name:mk3f7665f27187d07892ba5c4ce3c49cda04e887 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:27:40.461504    3578 out.go:177] * Starting control plane node calico-330000 in cluster calico-330000
	I0906 12:27:40.464389    3578 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 12:27:40.464409    3578 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0906 12:27:40.464422    3578 cache.go:57] Caching tarball of preloaded images
	I0906 12:27:40.464484    3578 preload.go:174] Found /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 12:27:40.464490    3578 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0906 12:27:40.464555    3578 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/calico-330000/config.json ...
	I0906 12:27:40.464573    3578 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/calico-330000/config.json: {Name:mk323571b4ddfa959ca0ef2c5e668cc9d85d9fa7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:27:40.464800    3578 start.go:365] acquiring machines lock for calico-330000: {Name:mkcd55fe6404821d1fd04819f324aec293fbf60b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:27:40.464832    3578 start.go:369] acquired machines lock for "calico-330000" in 25.625µs
	I0906 12:27:40.464843    3578 start.go:93] Provisioning new machine with config: &{Name:calico-330000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:calico-330000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:27:40.464873    3578 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:27:40.472509    3578 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0906 12:27:40.488611    3578 start.go:159] libmachine.API.Create for "calico-330000" (driver="qemu2")
	I0906 12:27:40.488629    3578 client.go:168] LocalClient.Create starting
	I0906 12:27:40.488685    3578 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem
	I0906 12:27:40.488714    3578 main.go:141] libmachine: Decoding PEM data...
	I0906 12:27:40.488729    3578 main.go:141] libmachine: Parsing certificate...
	I0906 12:27:40.488767    3578 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/cert.pem
	I0906 12:27:40.488786    3578 main.go:141] libmachine: Decoding PEM data...
	I0906 12:27:40.488793    3578 main.go:141] libmachine: Parsing certificate...
	I0906 12:27:40.489132    3578 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17116-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 12:27:40.606679    3578 main.go:141] libmachine: Creating SSH key...
	I0906 12:27:40.754552    3578 main.go:141] libmachine: Creating Disk image...
	I0906 12:27:40.754559    3578 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:27:40.754720    3578 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/calico-330000/disk.qcow2.raw /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/calico-330000/disk.qcow2
	I0906 12:27:40.763552    3578 main.go:141] libmachine: STDOUT: 
	I0906 12:27:40.763574    3578 main.go:141] libmachine: STDERR: 
	I0906 12:27:40.763638    3578 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/calico-330000/disk.qcow2 +20000M
	I0906 12:27:40.771050    3578 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:27:40.771061    3578 main.go:141] libmachine: STDERR: 
	I0906 12:27:40.771078    3578 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/calico-330000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/calico-330000/disk.qcow2
	I0906 12:27:40.771086    3578 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:27:40.771122    3578 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/calico-330000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/calico-330000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/calico-330000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:88:c2:6d:98:5b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/calico-330000/disk.qcow2
	I0906 12:27:40.772542    3578 main.go:141] libmachine: STDOUT: 
	I0906 12:27:40.772554    3578 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:27:40.772578    3578 client.go:171] LocalClient.Create took 283.950709ms
	I0906 12:27:42.774711    3578 start.go:128] duration metric: createHost completed in 2.309872958s
	I0906 12:27:42.774797    3578 start.go:83] releasing machines lock for "calico-330000", held for 2.310019083s
	W0906 12:27:42.774868    3578 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:27:42.787395    3578 out.go:177] * Deleting "calico-330000" in qemu2 ...
	W0906 12:27:42.807769    3578 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:27:42.807797    3578 start.go:687] Will try again in 5 seconds ...
	I0906 12:27:47.809914    3578 start.go:365] acquiring machines lock for calico-330000: {Name:mkcd55fe6404821d1fd04819f324aec293fbf60b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:27:47.810503    3578 start.go:369] acquired machines lock for "calico-330000" in 461.334µs
	I0906 12:27:47.810646    3578 start.go:93] Provisioning new machine with config: &{Name:calico-330000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:calico-330000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:27:47.811001    3578 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:27:47.821788    3578 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0906 12:27:47.870048    3578 start.go:159] libmachine.API.Create for "calico-330000" (driver="qemu2")
	I0906 12:27:47.870095    3578 client.go:168] LocalClient.Create starting
	I0906 12:27:47.870215    3578 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem
	I0906 12:27:47.870279    3578 main.go:141] libmachine: Decoding PEM data...
	I0906 12:27:47.870299    3578 main.go:141] libmachine: Parsing certificate...
	I0906 12:27:47.870379    3578 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/cert.pem
	I0906 12:27:47.870414    3578 main.go:141] libmachine: Decoding PEM data...
	I0906 12:27:47.870428    3578 main.go:141] libmachine: Parsing certificate...
	I0906 12:27:47.870921    3578 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17116-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 12:27:48.000952    3578 main.go:141] libmachine: Creating SSH key...
	I0906 12:27:48.075964    3578 main.go:141] libmachine: Creating Disk image...
	I0906 12:27:48.075971    3578 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:27:48.076120    3578 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/calico-330000/disk.qcow2.raw /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/calico-330000/disk.qcow2
	I0906 12:27:48.084701    3578 main.go:141] libmachine: STDOUT: 
	I0906 12:27:48.084716    3578 main.go:141] libmachine: STDERR: 
	I0906 12:27:48.084785    3578 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/calico-330000/disk.qcow2 +20000M
	I0906 12:27:48.092181    3578 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:27:48.092223    3578 main.go:141] libmachine: STDERR: 
	I0906 12:27:48.092246    3578 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/calico-330000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/calico-330000/disk.qcow2
	I0906 12:27:48.092253    3578 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:27:48.092299    3578 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/calico-330000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/calico-330000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/calico-330000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:c8:ff:27:c9:c0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/calico-330000/disk.qcow2
	I0906 12:27:48.093903    3578 main.go:141] libmachine: STDOUT: 
	I0906 12:27:48.093913    3578 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:27:48.093927    3578 client.go:171] LocalClient.Create took 223.829417ms
	I0906 12:27:50.096078    3578 start.go:128] duration metric: createHost completed in 2.285069167s
	I0906 12:27:50.096139    3578 start.go:83] releasing machines lock for "calico-330000", held for 2.2856675s
	W0906 12:27:50.096523    3578 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-330000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-330000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:27:50.104943    3578 out.go:177] 
	W0906 12:27:50.109010    3578 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:27:50.109067    3578 out.go:239] * 
	* 
	W0906 12:27:50.111869    3578 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:27:50.120815    3578 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.78s)
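Every failure in this group reduces to the same root cause: nothing is accepting connections on /var/run/socket_vmnet, so socket_vmnet_client exits with "Connection refused" before QEMU ever boots. A minimal pre-flight probe for that condition might look like the sketch below (this is not part of minikube; the socket path default is taken from the log above):

```python
import socket

def socket_vmnet_is_up(path="/var/run/socket_vmnet"):
    """Return True only if something is listening on the given unix socket.

    A missing socket file, a stale file with no listener, or a permissions
    problem all count as "down" -- the same conditions that surface in the
    log as 'Failed to connect to "/var/run/socket_vmnet": Connection refused'.
    """
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.connect(path)
        return True
    except OSError:  # ConnectionRefusedError, FileNotFoundError, etc.
        return False
    finally:
        s.close()
```

Running such a check before the test group starts would turn 87 identical 10-second provisioning failures into one immediate, clearly-labeled environment error.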

TestNetworkPlugins/group/custom-flannel/Start (9.81s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-330000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-330000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.80419975s)

-- stdout --
	* [custom-flannel-330000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17116
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node custom-flannel-330000 in cluster custom-flannel-330000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-330000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 12:27:52.481104    3698 out.go:296] Setting OutFile to fd 1 ...
	I0906 12:27:52.481227    3698 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:27:52.481229    3698 out.go:309] Setting ErrFile to fd 2...
	I0906 12:27:52.481232    3698 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:27:52.481341    3698 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17116-1006/.minikube/bin
	I0906 12:27:52.482325    3698 out.go:303] Setting JSON to false
	I0906 12:27:52.497401    3698 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1646,"bootTime":1694026826,"procs":400,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:27:52.497467    3698 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 12:27:52.502519    3698 out.go:177] * [custom-flannel-330000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 12:27:52.510507    3698 out.go:177]   - MINIKUBE_LOCATION=17116
	I0906 12:27:52.510583    3698 notify.go:220] Checking for updates...
	I0906 12:27:52.517527    3698 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	I0906 12:27:52.520440    3698 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:27:52.523504    3698 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:27:52.526530    3698 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	I0906 12:27:52.529405    3698 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:27:52.532851    3698 config.go:182] Loaded profile config "multinode-122000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 12:27:52.532903    3698 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 12:27:52.537451    3698 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 12:27:52.544465    3698 start.go:298] selected driver: qemu2
	I0906 12:27:52.544476    3698 start.go:902] validating driver "qemu2" against <nil>
	I0906 12:27:52.544494    3698 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:27:52.546453    3698 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0906 12:27:52.549534    3698 out.go:177] * Automatically selected the socket_vmnet network
	I0906 12:27:52.550920    3698 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:27:52.550938    3698 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0906 12:27:52.550947    3698 start_flags.go:316] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0906 12:27:52.550953    3698 start_flags.go:321] config:
	{Name:custom-flannel-330000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:custom-flannel-330000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 12:27:52.554796    3698 iso.go:125] acquiring lock: {Name:mk3f7665f27187d07892ba5c4ce3c49cda04e887 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:27:52.561549    3698 out.go:177] * Starting control plane node custom-flannel-330000 in cluster custom-flannel-330000
	I0906 12:27:52.565363    3698 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 12:27:52.565381    3698 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0906 12:27:52.565437    3698 cache.go:57] Caching tarball of preloaded images
	I0906 12:27:52.565488    3698 preload.go:174] Found /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 12:27:52.565493    3698 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0906 12:27:52.565554    3698 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/custom-flannel-330000/config.json ...
	I0906 12:27:52.565565    3698 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/custom-flannel-330000/config.json: {Name:mk2fa6595274e55fa8bc4291f1091b9cbc75ceda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:27:52.565776    3698 start.go:365] acquiring machines lock for custom-flannel-330000: {Name:mkcd55fe6404821d1fd04819f324aec293fbf60b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:27:52.565804    3698 start.go:369] acquired machines lock for "custom-flannel-330000" in 22.292µs
	I0906 12:27:52.565815    3698 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-330000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:custom-flannel-330000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:27:52.565838    3698 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:27:52.574492    3698 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0906 12:27:52.589183    3698 start.go:159] libmachine.API.Create for "custom-flannel-330000" (driver="qemu2")
	I0906 12:27:52.589213    3698 client.go:168] LocalClient.Create starting
	I0906 12:27:52.589264    3698 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem
	I0906 12:27:52.589295    3698 main.go:141] libmachine: Decoding PEM data...
	I0906 12:27:52.589308    3698 main.go:141] libmachine: Parsing certificate...
	I0906 12:27:52.589350    3698 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/cert.pem
	I0906 12:27:52.589368    3698 main.go:141] libmachine: Decoding PEM data...
	I0906 12:27:52.589376    3698 main.go:141] libmachine: Parsing certificate...
	I0906 12:27:52.589699    3698 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17116-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 12:27:52.704776    3698 main.go:141] libmachine: Creating SSH key...
	I0906 12:27:52.784421    3698 main.go:141] libmachine: Creating Disk image...
	I0906 12:27:52.784427    3698 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:27:52.784565    3698 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/custom-flannel-330000/disk.qcow2.raw /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/custom-flannel-330000/disk.qcow2
	I0906 12:27:52.792922    3698 main.go:141] libmachine: STDOUT: 
	I0906 12:27:52.792935    3698 main.go:141] libmachine: STDERR: 
	I0906 12:27:52.792978    3698 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/custom-flannel-330000/disk.qcow2 +20000M
	I0906 12:27:52.800075    3698 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:27:52.800084    3698 main.go:141] libmachine: STDERR: 
	I0906 12:27:52.800103    3698 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/custom-flannel-330000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/custom-flannel-330000/disk.qcow2
	I0906 12:27:52.800107    3698 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:27:52.800144    3698 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/custom-flannel-330000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/custom-flannel-330000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/custom-flannel-330000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:f8:1e:78:4f:56 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/custom-flannel-330000/disk.qcow2
	I0906 12:27:52.801690    3698 main.go:141] libmachine: STDOUT: 
	I0906 12:27:52.801706    3698 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:27:52.801724    3698 client.go:171] LocalClient.Create took 212.511375ms
	I0906 12:27:54.803831    3698 start.go:128] duration metric: createHost completed in 2.23803375s
	I0906 12:27:54.803897    3698 start.go:83] releasing machines lock for "custom-flannel-330000", held for 2.238144667s
	W0906 12:27:54.803994    3698 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:27:54.812286    3698 out.go:177] * Deleting "custom-flannel-330000" in qemu2 ...
	W0906 12:27:54.832808    3698 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:27:54.832829    3698 start.go:687] Will try again in 5 seconds ...
	I0906 12:27:59.834952    3698 start.go:365] acquiring machines lock for custom-flannel-330000: {Name:mkcd55fe6404821d1fd04819f324aec293fbf60b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:27:59.835347    3698 start.go:369] acquired machines lock for "custom-flannel-330000" in 301.75µs
	I0906 12:27:59.835464    3698 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-330000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:custom-flannel-330000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:27:59.835790    3698 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:27:59.843522    3698 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0906 12:27:59.890287    3698 start.go:159] libmachine.API.Create for "custom-flannel-330000" (driver="qemu2")
	I0906 12:27:59.890330    3698 client.go:168] LocalClient.Create starting
	I0906 12:27:59.890458    3698 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem
	I0906 12:27:59.890526    3698 main.go:141] libmachine: Decoding PEM data...
	I0906 12:27:59.890549    3698 main.go:141] libmachine: Parsing certificate...
	I0906 12:27:59.890637    3698 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/cert.pem
	I0906 12:27:59.890674    3698 main.go:141] libmachine: Decoding PEM data...
	I0906 12:27:59.890690    3698 main.go:141] libmachine: Parsing certificate...
	I0906 12:27:59.891220    3698 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17116-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 12:28:00.113724    3698 main.go:141] libmachine: Creating SSH key...
	I0906 12:28:00.196246    3698 main.go:141] libmachine: Creating Disk image...
	I0906 12:28:00.196257    3698 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:28:00.196410    3698 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/custom-flannel-330000/disk.qcow2.raw /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/custom-flannel-330000/disk.qcow2
	I0906 12:28:00.209454    3698 main.go:141] libmachine: STDOUT: 
	I0906 12:28:00.209484    3698 main.go:141] libmachine: STDERR: 
	I0906 12:28:00.209553    3698 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/custom-flannel-330000/disk.qcow2 +20000M
	I0906 12:28:00.217535    3698 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:28:00.217559    3698 main.go:141] libmachine: STDERR: 
	I0906 12:28:00.217585    3698 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/custom-flannel-330000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/custom-flannel-330000/disk.qcow2
	I0906 12:28:00.217593    3698 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:28:00.217633    3698 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/custom-flannel-330000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/custom-flannel-330000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/custom-flannel-330000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:5d:15:cd:d0:b4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/custom-flannel-330000/disk.qcow2
	I0906 12:28:00.219297    3698 main.go:141] libmachine: STDOUT: 
	I0906 12:28:00.219317    3698 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:28:00.219334    3698 client.go:171] LocalClient.Create took 329.006875ms
	I0906 12:28:02.221508    3698 start.go:128] duration metric: createHost completed in 2.385745542s
	I0906 12:28:02.221581    3698 start.go:83] releasing machines lock for "custom-flannel-330000", held for 2.386273042s
	W0906 12:28:02.221933    3698 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-330000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-330000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:28:02.235477    3698 out.go:177] 
	W0906 12:28:02.239704    3698 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:28:02.239739    3698 out.go:239] * 
	W0906 12:28:02.241626    3698 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:28:02.248654    3698 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.81s)

TestStoppedBinaryUpgrade/Upgrade (2.27s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:195: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.1402533063.exe start -p stopped-upgrade-139000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:195: (dbg) Non-zero exit: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.1402533063.exe start -p stopped-upgrade-139000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.1402533063.exe: permission denied (6.463625ms)
version_upgrade_test.go:195: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.1402533063.exe start -p stopped-upgrade-139000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:195: (dbg) Non-zero exit: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.1402533063.exe start -p stopped-upgrade-139000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.1402533063.exe: permission denied (5.443ms)
version_upgrade_test.go:195: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.1402533063.exe start -p stopped-upgrade-139000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:195: (dbg) Non-zero exit: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.1402533063.exe start -p stopped-upgrade-139000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.1402533063.exe: permission denied (1.704041ms)
version_upgrade_test.go:201: legacy v1.6.2 start failed: fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.1402533063.exe: permission denied
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (2.27s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.14s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-139000
version_upgrade_test.go:218: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p stopped-upgrade-139000: exit status 85 (134.009083ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kindnet-330000 sudo                               | kindnet-330000        | jenkins | v1.31.2 | 06 Sep 23 12:27 PDT |                     |
	|         | systemctl status kubelet --all                       |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p kindnet-330000 sudo                               | kindnet-330000        | jenkins | v1.31.2 | 06 Sep 23 12:27 PDT |                     |
	|         | systemctl cat kubelet                                |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p kindnet-330000 sudo                               | kindnet-330000        | jenkins | v1.31.2 | 06 Sep 23 12:27 PDT |                     |
	|         | journalctl -xeu kubelet --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p kindnet-330000 sudo cat                           | kindnet-330000        | jenkins | v1.31.2 | 06 Sep 23 12:27 PDT |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                       |         |         |                     |                     |
	| ssh     | -p kindnet-330000 sudo cat                           | kindnet-330000        | jenkins | v1.31.2 | 06 Sep 23 12:27 PDT |                     |
	|         | /var/lib/kubelet/config.yaml                         |                       |         |         |                     |                     |
	| ssh     | -p kindnet-330000 sudo                               | kindnet-330000        | jenkins | v1.31.2 | 06 Sep 23 12:27 PDT |                     |
	|         | systemctl status docker --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p kindnet-330000 sudo                               | kindnet-330000        | jenkins | v1.31.2 | 06 Sep 23 12:27 PDT |                     |
	|         | systemctl cat docker                                 |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p kindnet-330000 sudo cat                           | kindnet-330000        | jenkins | v1.31.2 | 06 Sep 23 12:27 PDT |                     |
	|         | /etc/docker/daemon.json                              |                       |         |         |                     |                     |
	| ssh     | -p kindnet-330000 sudo docker                        | kindnet-330000        | jenkins | v1.31.2 | 06 Sep 23 12:27 PDT |                     |
	|         | system info                                          |                       |         |         |                     |                     |
	| ssh     | -p kindnet-330000 sudo                               | kindnet-330000        | jenkins | v1.31.2 | 06 Sep 23 12:27 PDT |                     |
	|         | systemctl status cri-docker                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p kindnet-330000 sudo                               | kindnet-330000        | jenkins | v1.31.2 | 06 Sep 23 12:27 PDT |                     |
	|         | systemctl cat cri-docker                             |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p kindnet-330000 sudo cat                           | kindnet-330000        | jenkins | v1.31.2 | 06 Sep 23 12:27 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                       |         |         |                     |                     |
	| ssh     | -p kindnet-330000 sudo cat                           | kindnet-330000        | jenkins | v1.31.2 | 06 Sep 23 12:27 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                       |         |         |                     |                     |
	| ssh     | -p kindnet-330000 sudo                               | kindnet-330000        | jenkins | v1.31.2 | 06 Sep 23 12:27 PDT |                     |
	|         | cri-dockerd --version                                |                       |         |         |                     |                     |
	| ssh     | -p kindnet-330000 sudo                               | kindnet-330000        | jenkins | v1.31.2 | 06 Sep 23 12:27 PDT |                     |
	|         | systemctl status containerd                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p kindnet-330000 sudo                               | kindnet-330000        | jenkins | v1.31.2 | 06 Sep 23 12:27 PDT |                     |
	|         | systemctl cat containerd                             |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p kindnet-330000 sudo cat                           | kindnet-330000        | jenkins | v1.31.2 | 06 Sep 23 12:27 PDT |                     |
	|         | /lib/systemd/system/containerd.service               |                       |         |         |                     |                     |
	| ssh     | -p kindnet-330000 sudo cat                           | kindnet-330000        | jenkins | v1.31.2 | 06 Sep 23 12:27 PDT |                     |
	|         | /etc/containerd/config.toml                          |                       |         |         |                     |                     |
	| ssh     | -p kindnet-330000 sudo                               | kindnet-330000        | jenkins | v1.31.2 | 06 Sep 23 12:27 PDT |                     |
	|         | containerd config dump                               |                       |         |         |                     |                     |
	| ssh     | -p kindnet-330000 sudo                               | kindnet-330000        | jenkins | v1.31.2 | 06 Sep 23 12:27 PDT |                     |
	|         | systemctl status crio --all                          |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p kindnet-330000 sudo                               | kindnet-330000        | jenkins | v1.31.2 | 06 Sep 23 12:27 PDT |                     |
	|         | systemctl cat crio --no-pager                        |                       |         |         |                     |                     |
	| ssh     | -p kindnet-330000 sudo find                          | kindnet-330000        | jenkins | v1.31.2 | 06 Sep 23 12:27 PDT |                     |
	|         | /etc/crio -type f -exec sh -c                        |                       |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                       |         |         |                     |                     |
	| ssh     | -p kindnet-330000 sudo crio                          | kindnet-330000        | jenkins | v1.31.2 | 06 Sep 23 12:27 PDT |                     |
	|         | config                                               |                       |         |         |                     |                     |
	| delete  | -p kindnet-330000                                    | kindnet-330000        | jenkins | v1.31.2 | 06 Sep 23 12:27 PDT | 06 Sep 23 12:27 PDT |
	| start   | -p calico-330000 --memory=3072                       | calico-330000         | jenkins | v1.31.2 | 06 Sep 23 12:27 PDT |                     |
	|         | --alsologtostderr --wait=true                        |                       |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                       |         |         |                     |                     |
	|         | --cni=calico --driver=qemu2                          |                       |         |         |                     |                     |
	| ssh     | -p calico-330000 sudo cat                            | calico-330000         | jenkins | v1.31.2 | 06 Sep 23 12:27 PDT |                     |
	|         | /etc/nsswitch.conf                                   |                       |         |         |                     |                     |
	| ssh     | -p calico-330000 sudo cat                            | calico-330000         | jenkins | v1.31.2 | 06 Sep 23 12:27 PDT |                     |
	|         | /etc/hosts                                           |                       |         |         |                     |                     |
	| ssh     | -p calico-330000 sudo cat                            | calico-330000         | jenkins | v1.31.2 | 06 Sep 23 12:27 PDT |                     |
	|         | /etc/resolv.conf                                     |                       |         |         |                     |                     |
	| ssh     | -p calico-330000 sudo crictl                         | calico-330000         | jenkins | v1.31.2 | 06 Sep 23 12:27 PDT |                     |
	|         | pods                                                 |                       |         |         |                     |                     |
	| ssh     | -p calico-330000 sudo crictl                         | calico-330000         | jenkins | v1.31.2 | 06 Sep 23 12:27 PDT |                     |
	|         | ps --all                                             |                       |         |         |                     |                     |
	| ssh     | -p calico-330000 sudo find                           | calico-330000         | jenkins | v1.31.2 | 06 Sep 23 12:27 PDT |                     |
	|         | /etc/cni -type f -exec sh -c                         |                       |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                       |         |         |                     |                     |
	| ssh     | -p calico-330000 sudo ip a s                         | calico-330000         | jenkins | v1.31.2 | 06 Sep 23 12:27 PDT |                     |
	| ssh     | -p calico-330000 sudo ip r s                         | calico-330000         | jenkins | v1.31.2 | 06 Sep 23 12:27 PDT |                     |
	| ssh     | -p calico-330000 sudo                                | calico-330000         | jenkins | v1.31.2 | 06 Sep 23 12:27 PDT |                     |
	|         | iptables-save                                        |                       |         |         |                     |                     |
	| ssh     | -p calico-330000 sudo iptables                       | calico-330000         | jenkins | v1.31.2 | 06 Sep 23 12:27 PDT |                     |
	|         | -t nat -L -n -v                                      |                       |         |         |                     |                     |
	| ssh     | -p calico-330000 sudo                                | calico-330000         | jenkins | v1.31.2 | 06 Sep 23 12:27 PDT |                     |
	|         | systemctl status kubelet --all                       |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p calico-330000 sudo                                | calico-330000         | jenkins | v1.31.2 | 06 Sep 23 12:27 PDT |                     |
	|         | systemctl cat kubelet                                |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p calico-330000 sudo                                | calico-330000         | jenkins | v1.31.2 | 06 Sep 23 12:27 PDT |                     |
	|         | journalctl -xeu kubelet --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p calico-330000 sudo cat                            | calico-330000         | jenkins | v1.31.2 | 06 Sep 23 12:27 PDT |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                       |         |         |                     |                     |
	| ssh     | -p calico-330000 sudo cat                            | calico-330000         | jenkins | v1.31.2 | 06 Sep 23 12:27 PDT |                     |
	|         | /var/lib/kubelet/config.yaml                         |                       |         |         |                     |                     |
	| ssh     | -p calico-330000 sudo                                | calico-330000         | jenkins | v1.31.2 | 06 Sep 23 12:27 PDT |                     |
	|         | systemctl status docker --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p calico-330000 sudo                                | calico-330000         | jenkins | v1.31.2 | 06 Sep 23 12:27 PDT |                     |
	|         | systemctl cat docker                                 |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p calico-330000 sudo cat                            | calico-330000         | jenkins | v1.31.2 | 06 Sep 23 12:27 PDT |                     |
	|         | /etc/docker/daemon.json                              |                       |         |         |                     |                     |
	| ssh     | -p calico-330000 sudo docker                         | calico-330000         | jenkins | v1.31.2 | 06 Sep 23 12:27 PDT |                     |
	|         | system info                                          |                       |         |         |                     |                     |
	| ssh     | -p calico-330000 sudo                                | calico-330000         | jenkins | v1.31.2 | 06 Sep 23 12:27 PDT |                     |
	|         | systemctl status cri-docker                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p calico-330000 sudo                                | calico-330000         | jenkins | v1.31.2 | 06 Sep 23 12:27 PDT |                     |
	|         | systemctl cat cri-docker                             |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p calico-330000 sudo cat                            | calico-330000         | jenkins | v1.31.2 | 06 Sep 23 12:27 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                       |         |         |                     |                     |
	| ssh     | -p calico-330000 sudo cat                            | calico-330000         | jenkins | v1.31.2 | 06 Sep 23 12:27 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                       |         |         |                     |                     |
	| ssh     | -p calico-330000 sudo                                | calico-330000         | jenkins | v1.31.2 | 06 Sep 23 12:27 PDT |                     |
	|         | cri-dockerd --version                                |                       |         |         |                     |                     |
	| ssh     | -p calico-330000 sudo                                | calico-330000         | jenkins | v1.31.2 | 06 Sep 23 12:27 PDT |                     |
	|         | systemctl status containerd                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p calico-330000 sudo                                | calico-330000         | jenkins | v1.31.2 | 06 Sep 23 12:27 PDT |                     |
	|         | systemctl cat containerd                             |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p calico-330000 sudo cat                            | calico-330000         | jenkins | v1.31.2 | 06 Sep 23 12:27 PDT |                     |
	|         | /lib/systemd/system/containerd.service               |                       |         |         |                     |                     |
	| ssh     | -p calico-330000 sudo cat                            | calico-330000         | jenkins | v1.31.2 | 06 Sep 23 12:27 PDT |                     |
	|         | /etc/containerd/config.toml                          |                       |         |         |                     |                     |
	| ssh     | -p calico-330000 sudo                                | calico-330000         | jenkins | v1.31.2 | 06 Sep 23 12:27 PDT |                     |
	|         | containerd config dump                               |                       |         |         |                     |                     |
	| ssh     | -p calico-330000 sudo                                | calico-330000         | jenkins | v1.31.2 | 06 Sep 23 12:27 PDT |                     |
	|         | systemctl status crio --all                          |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p calico-330000 sudo                                | calico-330000         | jenkins | v1.31.2 | 06 Sep 23 12:27 PDT |                     |
	|         | systemctl cat crio --no-pager                        |                       |         |         |                     |                     |
	| ssh     | -p calico-330000 sudo find                           | calico-330000         | jenkins | v1.31.2 | 06 Sep 23 12:27 PDT |                     |
	|         | /etc/crio -type f -exec sh -c                        |                       |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                       |         |         |                     |                     |
	| ssh     | -p calico-330000 sudo crio                           | calico-330000         | jenkins | v1.31.2 | 06 Sep 23 12:27 PDT |                     |
	|         | config                                               |                       |         |         |                     |                     |
	| delete  | -p calico-330000                                     | calico-330000         | jenkins | v1.31.2 | 06 Sep 23 12:27 PDT | 06 Sep 23 12:27 PDT |
	| start   | -p custom-flannel-330000                             | custom-flannel-330000 | jenkins | v1.31.2 | 06 Sep 23 12:27 PDT |                     |
	|         | --memory=3072 --alsologtostderr                      |                       |         |         |                     |                     |
	|         | --wait=true --wait-timeout=15m                       |                       |         |         |                     |                     |
	|         | --cni=testdata/kube-flannel.yaml                     |                       |         |         |                     |                     |
	|         | --driver=qemu2                                       |                       |         |         |                     |                     |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/06 12:27:52
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.7 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 12:27:52.481104    3698 out.go:296] Setting OutFile to fd 1 ...
	I0906 12:27:52.481227    3698 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:27:52.481229    3698 out.go:309] Setting ErrFile to fd 2...
	I0906 12:27:52.481232    3698 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:27:52.481341    3698 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17116-1006/.minikube/bin
	I0906 12:27:52.482325    3698 out.go:303] Setting JSON to false
	I0906 12:27:52.497401    3698 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1646,"bootTime":1694026826,"procs":400,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:27:52.497467    3698 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 12:27:52.502519    3698 out.go:177] * [custom-flannel-330000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 12:27:52.510507    3698 out.go:177]   - MINIKUBE_LOCATION=17116
	I0906 12:27:52.510583    3698 notify.go:220] Checking for updates...
	I0906 12:27:52.517527    3698 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	I0906 12:27:52.520440    3698 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:27:52.523504    3698 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:27:52.526530    3698 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	I0906 12:27:52.529405    3698 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:27:52.532851    3698 config.go:182] Loaded profile config "multinode-122000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 12:27:52.532903    3698 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 12:27:52.537451    3698 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 12:27:52.544465    3698 start.go:298] selected driver: qemu2
	I0906 12:27:52.544476    3698 start.go:902] validating driver "qemu2" against <nil>
	I0906 12:27:52.544494    3698 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:27:52.546453    3698 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0906 12:27:52.549534    3698 out.go:177] * Automatically selected the socket_vmnet network
	I0906 12:27:52.550920    3698 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:27:52.550938    3698 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0906 12:27:52.550947    3698 start_flags.go:316] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0906 12:27:52.550953    3698 start_flags.go:321] config:
	{Name:custom-flannel-330000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:custom-flannel-330000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 12:27:52.554796    3698 iso.go:125] acquiring lock: {Name:mk3f7665f27187d07892ba5c4ce3c49cda04e887 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:27:52.561549    3698 out.go:177] * Starting control plane node custom-flannel-330000 in cluster custom-flannel-330000
	I0906 12:27:52.565363    3698 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 12:27:52.565381    3698 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0906 12:27:52.565437    3698 cache.go:57] Caching tarball of preloaded images
	I0906 12:27:52.565488    3698 preload.go:174] Found /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 12:27:52.565493    3698 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0906 12:27:52.565554    3698 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/custom-flannel-330000/config.json ...
	I0906 12:27:52.565565    3698 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/custom-flannel-330000/config.json: {Name:mk2fa6595274e55fa8bc4291f1091b9cbc75ceda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:27:52.565776    3698 start.go:365] acquiring machines lock for custom-flannel-330000: {Name:mkcd55fe6404821d1fd04819f324aec293fbf60b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:27:52.565804    3698 start.go:369] acquired machines lock for "custom-flannel-330000" in 22.292µs
	I0906 12:27:52.565815    3698 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-330000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.28.1 ClusterName:custom-flannel-330000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:27:52.565838    3698 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:27:52.574492    3698 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0906 12:27:52.589183    3698 start.go:159] libmachine.API.Create for "custom-flannel-330000" (driver="qemu2")
	I0906 12:27:52.589213    3698 client.go:168] LocalClient.Create starting
	I0906 12:27:52.589264    3698 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem
	I0906 12:27:52.589295    3698 main.go:141] libmachine: Decoding PEM data...
	I0906 12:27:52.589308    3698 main.go:141] libmachine: Parsing certificate...
	I0906 12:27:52.589350    3698 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/cert.pem
	I0906 12:27:52.589368    3698 main.go:141] libmachine: Decoding PEM data...
	I0906 12:27:52.589376    3698 main.go:141] libmachine: Parsing certificate...
	I0906 12:27:52.589699    3698 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17116-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 12:27:52.704776    3698 main.go:141] libmachine: Creating SSH key...
	I0906 12:27:52.784421    3698 main.go:141] libmachine: Creating Disk image...
	I0906 12:27:52.784427    3698 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:27:52.784565    3698 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/custom-flannel-330000/disk.qcow2.raw /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/custom-flannel-330000/disk.qcow2
	I0906 12:27:52.792922    3698 main.go:141] libmachine: STDOUT: 
	I0906 12:27:52.792935    3698 main.go:141] libmachine: STDERR: 
	I0906 12:27:52.792978    3698 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/custom-flannel-330000/disk.qcow2 +20000M
	I0906 12:27:52.800075    3698 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:27:52.800084    3698 main.go:141] libmachine: STDERR: 
	I0906 12:27:52.800103    3698 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/custom-flannel-330000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/custom-flannel-330000/disk.qcow2
	I0906 12:27:52.800107    3698 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:27:52.800144    3698 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/custom-flannel-330000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/custom-flannel-330000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/custom-flannel-330000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:f8:1e:78:4f:56 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/custom-flannel-330000/disk.qcow2
	I0906 12:27:52.801690    3698 main.go:141] libmachine: STDOUT: 
	I0906 12:27:52.801706    3698 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:27:52.801724    3698 client.go:171] LocalClient.Create took 212.511375ms
	I0906 12:27:54.803831    3698 start.go:128] duration metric: createHost completed in 2.23803375s
	I0906 12:27:54.803897    3698 start.go:83] releasing machines lock for "custom-flannel-330000", held for 2.238144667s
	W0906 12:27:54.803994    3698 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:27:54.812286    3698 out.go:177] * Deleting "custom-flannel-330000" in qemu2 ...
	W0906 12:27:54.832808    3698 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:27:54.832829    3698 start.go:687] Will try again in 5 seconds ...
	I0906 12:27:59.834952    3698 start.go:365] acquiring machines lock for custom-flannel-330000: {Name:mkcd55fe6404821d1fd04819f324aec293fbf60b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:27:59.835347    3698 start.go:369] acquired machines lock for "custom-flannel-330000" in 301.75µs
	I0906 12:27:59.835464    3698 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-330000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.28.1 ClusterName:custom-flannel-330000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:27:59.835790    3698 start.go:125] createHost starting for "" (driver="qemu2")
	
	* 
	* Profile "stopped-upgrade-139000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p stopped-upgrade-139000"

-- /stdout --
version_upgrade_test.go:220: `minikube logs` after upgrade to HEAD from v1.6.2 failed: exit status 85
--- FAIL: TestStoppedBinaryUpgrade/MinikubeLogs (0.14s)
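Editor's note: nearly every qemu2 start in this run fails with `Failed to connect to "/var/run/socket_vmnet": Connection refused`, which points at the socket_vmnet daemon not running on the build agent rather than at the individual tests. A minimal triage sketch, assuming the socket path from `SocketVMnetPath` in the configs above (`/var/run/socket_vmnet`) and a Homebrew-managed daemon; the exact service name and restart command are assumptions, not taken from this report:

```shell
#!/bin/sh
# Check whether the socket_vmnet unix socket exists on this machine.
# Path assumed from SocketVMnetPath in the minikube profile configs.
SOCKET=/var/run/socket_vmnet

if [ -S "$SOCKET" ]; then
  STATUS=present
else
  STATUS=absent
fi
echo "socket_vmnet socket is $STATUS at $SOCKET"

# If absent, the daemon is likely not started; on a Homebrew install it is
# typically managed via launchd, e.g.:
#   sudo brew services restart socket_vmnet
pgrep -fl socket_vmnet || echo "no socket_vmnet process found"
```

If the socket is absent, restarting the daemon and re-running the suite would be the first thing to try before investigating individual test failures.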

TestNetworkPlugins/group/false/Start (11.47s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-330000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-330000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (11.467992084s)

-- stdout --
	* [false-330000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17116
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node false-330000 in cluster false-330000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-330000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 12:28:00.396409    3733 out.go:296] Setting OutFile to fd 1 ...
	I0906 12:28:00.396529    3733 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:28:00.396532    3733 out.go:309] Setting ErrFile to fd 2...
	I0906 12:28:00.396535    3733 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:28:00.396647    3733 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17116-1006/.minikube/bin
	I0906 12:28:00.397673    3733 out.go:303] Setting JSON to false
	I0906 12:28:00.412750    3733 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1654,"bootTime":1694026826,"procs":401,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:28:00.412838    3733 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 12:28:00.417022    3733 out.go:177] * [false-330000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 12:28:00.424009    3733 out.go:177]   - MINIKUBE_LOCATION=17116
	I0906 12:28:00.424057    3733 notify.go:220] Checking for updates...
	I0906 12:28:00.430958    3733 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	I0906 12:28:00.433979    3733 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:28:00.436960    3733 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:28:00.440001    3733 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	I0906 12:28:00.442906    3733 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:28:00.446317    3733 config.go:182] Loaded profile config "custom-flannel-330000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 12:28:00.446387    3733 config.go:182] Loaded profile config "multinode-122000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 12:28:00.446439    3733 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 12:28:00.450967    3733 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 12:28:00.457977    3733 start.go:298] selected driver: qemu2
	I0906 12:28:00.457990    3733 start.go:902] validating driver "qemu2" against <nil>
	I0906 12:28:00.457996    3733 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:28:00.459967    3733 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0906 12:28:00.462960    3733 out.go:177] * Automatically selected the socket_vmnet network
	I0906 12:28:00.466000    3733 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:28:00.466035    3733 cni.go:84] Creating CNI manager for "false"
	I0906 12:28:00.466040    3733 start_flags.go:321] config:
	{Name:false-330000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:false-330000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRIS
ocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPI
D:0 AutoPauseInterval:1m0s}
	I0906 12:28:00.470487    3733 iso.go:125] acquiring lock: {Name:mk3f7665f27187d07892ba5c4ce3c49cda04e887 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:28:00.475042    3733 out.go:177] * Starting control plane node false-330000 in cluster false-330000
	I0906 12:28:00.482985    3733 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 12:28:00.483011    3733 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0906 12:28:00.483036    3733 cache.go:57] Caching tarball of preloaded images
	I0906 12:28:00.483114    3733 preload.go:174] Found /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 12:28:00.483122    3733 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0906 12:28:00.483196    3733 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/false-330000/config.json ...
	I0906 12:28:00.483208    3733 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/false-330000/config.json: {Name:mk670a20422dcdeb679a394be252a18b69d69334 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:28:00.483411    3733 start.go:365] acquiring machines lock for false-330000: {Name:mkcd55fe6404821d1fd04819f324aec293fbf60b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:28:02.221746    3733 start.go:369] acquired machines lock for "false-330000" in 1.73834425s
	I0906 12:28:02.221937    3733 start.go:93] Provisioning new machine with config: &{Name:false-330000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.1 ClusterName:false-330000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26
2144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:28:02.222262    3733 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:28:02.231641    3733 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0906 12:28:02.277324    3733 start.go:159] libmachine.API.Create for "false-330000" (driver="qemu2")
	I0906 12:28:02.277368    3733 client.go:168] LocalClient.Create starting
	I0906 12:28:02.277549    3733 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem
	I0906 12:28:02.277598    3733 main.go:141] libmachine: Decoding PEM data...
	I0906 12:28:02.277619    3733 main.go:141] libmachine: Parsing certificate...
	I0906 12:28:02.277693    3733 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/cert.pem
	I0906 12:28:02.277721    3733 main.go:141] libmachine: Decoding PEM data...
	I0906 12:28:02.277741    3733 main.go:141] libmachine: Parsing certificate...
	I0906 12:28:02.278374    3733 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17116-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 12:28:02.408298    3733 main.go:141] libmachine: Creating SSH key...
	I0906 12:28:02.511316    3733 main.go:141] libmachine: Creating Disk image...
	I0906 12:28:02.511329    3733 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:28:02.511483    3733 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/false-330000/disk.qcow2.raw /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/false-330000/disk.qcow2
	I0906 12:28:02.520602    3733 main.go:141] libmachine: STDOUT: 
	I0906 12:28:02.520632    3733 main.go:141] libmachine: STDERR: 
	I0906 12:28:02.520712    3733 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/false-330000/disk.qcow2 +20000M
	I0906 12:28:02.529093    3733 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:28:02.529108    3733 main.go:141] libmachine: STDERR: 
	I0906 12:28:02.529140    3733 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/false-330000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/false-330000/disk.qcow2
	I0906 12:28:02.529149    3733 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:28:02.529197    3733 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/false-330000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/false-330000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/false-330000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:5f:56:a7:01:fd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/false-330000/disk.qcow2
	I0906 12:28:02.531057    3733 main.go:141] libmachine: STDOUT: 
	I0906 12:28:02.531070    3733 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:28:02.531097    3733 client.go:171] LocalClient.Create took 253.728958ms
	I0906 12:28:04.532341    3733 start.go:128] duration metric: createHost completed in 2.310130209s
	I0906 12:28:04.532353    3733 start.go:83] releasing machines lock for "false-330000", held for 2.310640084s
	W0906 12:28:04.532370    3733 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:28:04.536379    3733 out.go:177] * Deleting "false-330000" in qemu2 ...
	W0906 12:28:04.551530    3733 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:28:04.551536    3733 start.go:687] Will try again in 5 seconds ...
	I0906 12:28:09.553670    3733 start.go:365] acquiring machines lock for false-330000: {Name:mkcd55fe6404821d1fd04819f324aec293fbf60b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:28:09.554221    3733 start.go:369] acquired machines lock for "false-330000" in 390.084µs
	I0906 12:28:09.554383    3733 start.go:93] Provisioning new machine with config: &{Name:false-330000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.1 ClusterName:false-330000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26
2144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:28:09.554766    3733 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:28:09.563433    3733 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0906 12:28:09.611916    3733 start.go:159] libmachine.API.Create for "false-330000" (driver="qemu2")
	I0906 12:28:09.611962    3733 client.go:168] LocalClient.Create starting
	I0906 12:28:09.612136    3733 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem
	I0906 12:28:09.612212    3733 main.go:141] libmachine: Decoding PEM data...
	I0906 12:28:09.612230    3733 main.go:141] libmachine: Parsing certificate...
	I0906 12:28:09.612300    3733 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/cert.pem
	I0906 12:28:09.612336    3733 main.go:141] libmachine: Decoding PEM data...
	I0906 12:28:09.612350    3733 main.go:141] libmachine: Parsing certificate...
	I0906 12:28:09.612792    3733 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17116-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 12:28:09.741859    3733 main.go:141] libmachine: Creating SSH key...
	I0906 12:28:09.778708    3733 main.go:141] libmachine: Creating Disk image...
	I0906 12:28:09.778714    3733 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:28:09.778854    3733 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/false-330000/disk.qcow2.raw /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/false-330000/disk.qcow2
	I0906 12:28:09.787263    3733 main.go:141] libmachine: STDOUT: 
	I0906 12:28:09.787281    3733 main.go:141] libmachine: STDERR: 
	I0906 12:28:09.787358    3733 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/false-330000/disk.qcow2 +20000M
	I0906 12:28:09.794490    3733 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:28:09.794505    3733 main.go:141] libmachine: STDERR: 
	I0906 12:28:09.794519    3733 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/false-330000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/false-330000/disk.qcow2
	I0906 12:28:09.794525    3733 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:28:09.794566    3733 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/false-330000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/false-330000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/false-330000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:62:45:d9:28:26 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/false-330000/disk.qcow2
	I0906 12:28:09.796073    3733 main.go:141] libmachine: STDOUT: 
	I0906 12:28:09.796090    3733 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:28:09.796101    3733 client.go:171] LocalClient.Create took 184.136417ms
	I0906 12:28:11.797700    3733 start.go:128] duration metric: createHost completed in 2.242958125s
	I0906 12:28:11.797786    3733 start.go:83] releasing machines lock for "false-330000", held for 2.24358075s
	W0906 12:28:11.798250    3733 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-330000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-330000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:28:11.808909    3733 out.go:177] 
	W0906 12:28:11.813092    3733 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:28:11.813114    3733 out.go:239] * 
	* 
	W0906 12:28:11.815835    3733 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:28:11.823919    3733 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (11.47s)
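Every failure in this run traces back to the same root cause visible in the log above: the qemu2 driver cannot reach the socket_vmnet daemon's Unix socket at `/var/run/socket_vmnet`. A minimal pre-flight sketch of that check is below; `check_socket` is a hypothetical helper (not part of minikube), and the socket path is taken from the log output, so adjust it if socket_vmnet was installed elsewhere.

```shell
#!/bin/sh
# Hypothetical helper: verify the socket_vmnet Unix socket exists before
# starting minikube with the qemu2 driver. The -S test checks specifically
# for a socket file, not just any path.
check_socket() {
  if [ -S "$1" ]; then
    echo "socket present: $1"
  else
    echo "socket missing: $1 (is the socket_vmnet daemon running?)"
  fi
}

check_socket /var/run/socket_vmnet
```

Note that a present socket can still refuse connections if the daemon died without cleaning up, so this check is necessary but not sufficient.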

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (9.73s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-330000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-330000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.733195125s)

                                                
                                                
-- stdout --
	* [enable-default-cni-330000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17116
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node enable-default-cni-330000 in cluster enable-default-cni-330000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-330000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0906 12:28:04.593533    3847 out.go:296] Setting OutFile to fd 1 ...
	I0906 12:28:04.593669    3847 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:28:04.593672    3847 out.go:309] Setting ErrFile to fd 2...
	I0906 12:28:04.593674    3847 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:28:04.593799    3847 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17116-1006/.minikube/bin
	I0906 12:28:04.594827    3847 out.go:303] Setting JSON to false
	I0906 12:28:04.609897    3847 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1658,"bootTime":1694026826,"procs":401,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:28:04.609972    3847 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 12:28:04.614353    3847 out.go:177] * [enable-default-cni-330000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 12:28:04.622372    3847 out.go:177]   - MINIKUBE_LOCATION=17116
	I0906 12:28:04.626292    3847 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	I0906 12:28:04.622450    3847 notify.go:220] Checking for updates...
	I0906 12:28:04.632363    3847 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:28:04.635373    3847 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:28:04.638330    3847 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	I0906 12:28:04.641401    3847 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:28:04.644652    3847 config.go:182] Loaded profile config "false-330000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 12:28:04.644719    3847 config.go:182] Loaded profile config "multinode-122000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 12:28:04.644757    3847 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 12:28:04.649311    3847 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 12:28:04.656275    3847 start.go:298] selected driver: qemu2
	I0906 12:28:04.656281    3847 start.go:902] validating driver "qemu2" against <nil>
	I0906 12:28:04.656288    3847 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:28:04.658316    3847 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0906 12:28:04.661366    3847 out.go:177] * Automatically selected the socket_vmnet network
	E0906 12:28:04.664481    3847 start_flags.go:455] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0906 12:28:04.664494    3847 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:28:04.664523    3847 cni.go:84] Creating CNI manager for "bridge"
	I0906 12:28:04.664528    3847 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 12:28:04.664534    3847 start_flags.go:321] config:
	{Name:enable-default-cni-330000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:enable-default-cni-330000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 12:28:04.668635    3847 iso.go:125] acquiring lock: {Name:mk3f7665f27187d07892ba5c4ce3c49cda04e887 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:28:04.676334    3847 out.go:177] * Starting control plane node enable-default-cni-330000 in cluster enable-default-cni-330000
	I0906 12:28:04.680379    3847 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 12:28:04.680401    3847 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0906 12:28:04.680417    3847 cache.go:57] Caching tarball of preloaded images
	I0906 12:28:04.680479    3847 preload.go:174] Found /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 12:28:04.680484    3847 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0906 12:28:04.680559    3847 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/enable-default-cni-330000/config.json ...
	I0906 12:28:04.680575    3847 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/enable-default-cni-330000/config.json: {Name:mk17160b73b65670f02030dba289f6507c8dc768 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:28:04.680793    3847 start.go:365] acquiring machines lock for enable-default-cni-330000: {Name:mkcd55fe6404821d1fd04819f324aec293fbf60b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:28:04.680828    3847 start.go:369] acquired machines lock for "enable-default-cni-330000" in 25.958µs
	I0906 12:28:04.680843    3847 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-330000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:enable-default-cni-330000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:28:04.680875    3847 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:28:04.688344    3847 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0906 12:28:04.704387    3847 start.go:159] libmachine.API.Create for "enable-default-cni-330000" (driver="qemu2")
	I0906 12:28:04.704414    3847 client.go:168] LocalClient.Create starting
	I0906 12:28:04.704464    3847 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem
	I0906 12:28:04.704491    3847 main.go:141] libmachine: Decoding PEM data...
	I0906 12:28:04.704502    3847 main.go:141] libmachine: Parsing certificate...
	I0906 12:28:04.704539    3847 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/cert.pem
	I0906 12:28:04.704557    3847 main.go:141] libmachine: Decoding PEM data...
	I0906 12:28:04.704566    3847 main.go:141] libmachine: Parsing certificate...
	I0906 12:28:04.704883    3847 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17116-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 12:28:04.821304    3847 main.go:141] libmachine: Creating SSH key...
	I0906 12:28:04.907054    3847 main.go:141] libmachine: Creating Disk image...
	I0906 12:28:04.907062    3847 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:28:04.907220    3847 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/enable-default-cni-330000/disk.qcow2.raw /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/enable-default-cni-330000/disk.qcow2
	I0906 12:28:04.915678    3847 main.go:141] libmachine: STDOUT: 
	I0906 12:28:04.915691    3847 main.go:141] libmachine: STDERR: 
	I0906 12:28:04.915737    3847 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/enable-default-cni-330000/disk.qcow2 +20000M
	I0906 12:28:04.922852    3847 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:28:04.922866    3847 main.go:141] libmachine: STDERR: 
	I0906 12:28:04.922883    3847 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/enable-default-cni-330000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/enable-default-cni-330000/disk.qcow2
	I0906 12:28:04.922894    3847 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:28:04.922930    3847 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/enable-default-cni-330000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/enable-default-cni-330000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/enable-default-cni-330000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:b9:e1:9d:2d:af -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/enable-default-cni-330000/disk.qcow2
	I0906 12:28:04.924434    3847 main.go:141] libmachine: STDOUT: 
	I0906 12:28:04.924447    3847 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:28:04.924466    3847 client.go:171] LocalClient.Create took 220.052ms
	I0906 12:28:06.926578    3847 start.go:128] duration metric: createHost completed in 2.245746875s
	I0906 12:28:06.926662    3847 start.go:83] releasing machines lock for "enable-default-cni-330000", held for 2.245861041s
	W0906 12:28:06.926770    3847 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:28:06.934134    3847 out.go:177] * Deleting "enable-default-cni-330000" in qemu2 ...
	W0906 12:28:06.960351    3847 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:28:06.960377    3847 start.go:687] Will try again in 5 seconds ...
	I0906 12:28:11.960432    3847 start.go:365] acquiring machines lock for enable-default-cni-330000: {Name:mkcd55fe6404821d1fd04819f324aec293fbf60b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:28:11.960513    3847 start.go:369] acquired machines lock for "enable-default-cni-330000" in 59.083µs
	I0906 12:28:11.960543    3847 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-330000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:enable-default-cni-330000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:28:11.960593    3847 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:28:11.968804    3847 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0906 12:28:11.982754    3847 start.go:159] libmachine.API.Create for "enable-default-cni-330000" (driver="qemu2")
	I0906 12:28:11.982779    3847 client.go:168] LocalClient.Create starting
	I0906 12:28:11.982838    3847 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem
	I0906 12:28:11.982873    3847 main.go:141] libmachine: Decoding PEM data...
	I0906 12:28:11.982884    3847 main.go:141] libmachine: Parsing certificate...
	I0906 12:28:11.982920    3847 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/cert.pem
	I0906 12:28:11.982934    3847 main.go:141] libmachine: Decoding PEM data...
	I0906 12:28:11.982941    3847 main.go:141] libmachine: Parsing certificate...
	I0906 12:28:11.983234    3847 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17116-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 12:28:12.104493    3847 main.go:141] libmachine: Creating SSH key...
	I0906 12:28:12.239386    3847 main.go:141] libmachine: Creating Disk image...
	I0906 12:28:12.239394    3847 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:28:12.239534    3847 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/enable-default-cni-330000/disk.qcow2.raw /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/enable-default-cni-330000/disk.qcow2
	I0906 12:28:12.251928    3847 main.go:141] libmachine: STDOUT: 
	I0906 12:28:12.251952    3847 main.go:141] libmachine: STDERR: 
	I0906 12:28:12.252038    3847 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/enable-default-cni-330000/disk.qcow2 +20000M
	I0906 12:28:12.259864    3847 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:28:12.259884    3847 main.go:141] libmachine: STDERR: 
	I0906 12:28:12.259907    3847 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/enable-default-cni-330000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/enable-default-cni-330000/disk.qcow2
	I0906 12:28:12.259924    3847 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:28:12.259963    3847 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/enable-default-cni-330000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/enable-default-cni-330000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/enable-default-cni-330000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:49:5d:1c:8b:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/enable-default-cni-330000/disk.qcow2
	I0906 12:28:12.261706    3847 main.go:141] libmachine: STDOUT: 
	I0906 12:28:12.261720    3847 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:28:12.261735    3847 client.go:171] LocalClient.Create took 278.959958ms
	I0906 12:28:14.263861    3847 start.go:128] duration metric: createHost completed in 2.303308792s
	I0906 12:28:14.263941    3847 start.go:83] releasing machines lock for "enable-default-cni-330000", held for 2.303479375s
	W0906 12:28:14.264347    3847 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-330000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-330000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:28:14.275866    3847 out.go:177] 
	W0906 12:28:14.278957    3847 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:28:14.279075    3847 out.go:239] * 
	* 
	W0906 12:28:14.282128    3847 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:28:14.289733    3847 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.73s)
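The log shows "Connection refused" on both the first create attempt and the retry after `Deleting ... in qemu2`, which suggests the socket_vmnet daemon was not running at all on this agent rather than failing transiently. A hedged process check is sketched below; `daemon_running` is a hypothetical helper (not part of minikube or socket_vmnet), and `pgrep -x` matches the exact process name, which ships with both macOS and Linux.

```shell
#!/bin/sh
# Hypothetical helper: report whether a daemon with the given exact process
# name appears in the process table. Prints "running" or "not running".
daemon_running() {
  if pgrep -x "$1" >/dev/null 2>&1; then
    echo "running"
  else
    echo "not running"
  fi
}

daemon_running socket_vmnet
```

If the daemon is down, restarting it (by whatever mechanism it was installed with, e.g. a launchd service on macOS) before rerunning the suite should clear this entire class of failures.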

                                                
                                    
TestNetworkPlugins/group/flannel/Start (9.97s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-330000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-330000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.966800583s)

-- stdout --
	* [flannel-330000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17116
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node flannel-330000 in cluster flannel-330000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-330000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I0906 12:28:13.995580    3964 out.go:296] Setting OutFile to fd 1 ...
	I0906 12:28:13.995708    3964 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:28:13.995711    3964 out.go:309] Setting ErrFile to fd 2...
	I0906 12:28:13.995713    3964 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:28:13.995818    3964 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17116-1006/.minikube/bin
	I0906 12:28:13.996849    3964 out.go:303] Setting JSON to false
	I0906 12:28:14.011825    3964 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1668,"bootTime":1694026826,"procs":402,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:28:14.011896    3964 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 12:28:14.017236    3964 out.go:177] * [flannel-330000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 12:28:14.024342    3964 notify.go:220] Checking for updates...
	I0906 12:28:14.024350    3964 out.go:177]   - MINIKUBE_LOCATION=17116
	I0906 12:28:14.028206    3964 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	I0906 12:28:14.031280    3964 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:28:14.034132    3964 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:28:14.037323    3964 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	I0906 12:28:14.040197    3964 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:28:14.041899    3964 config.go:182] Loaded profile config "enable-default-cni-330000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 12:28:14.041973    3964 config.go:182] Loaded profile config "multinode-122000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 12:28:14.042015    3964 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 12:28:14.046194    3964 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 12:28:14.053085    3964 start.go:298] selected driver: qemu2
	I0906 12:28:14.053090    3964 start.go:902] validating driver "qemu2" against <nil>
	I0906 12:28:14.053096    3964 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:28:14.055036    3964 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0906 12:28:14.058206    3964 out.go:177] * Automatically selected the socket_vmnet network
	I0906 12:28:14.061242    3964 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:28:14.061265    3964 cni.go:84] Creating CNI manager for "flannel"
	I0906 12:28:14.061270    3964 start_flags.go:316] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0906 12:28:14.061276    3964 start_flags.go:321] config:
	{Name:flannel-330000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:flannel-330000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: S
SHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 12:28:14.065242    3964 iso.go:125] acquiring lock: {Name:mk3f7665f27187d07892ba5c4ce3c49cda04e887 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:28:14.072247    3964 out.go:177] * Starting control plane node flannel-330000 in cluster flannel-330000
	I0906 12:28:14.076256    3964 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 12:28:14.076276    3964 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0906 12:28:14.076293    3964 cache.go:57] Caching tarball of preloaded images
	I0906 12:28:14.076362    3964 preload.go:174] Found /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 12:28:14.076367    3964 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0906 12:28:14.076429    3964 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/flannel-330000/config.json ...
	I0906 12:28:14.076440    3964 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/flannel-330000/config.json: {Name:mk2eedf4258e21c629dfdae122c54e017863ca7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:28:14.076661    3964 start.go:365] acquiring machines lock for flannel-330000: {Name:mkcd55fe6404821d1fd04819f324aec293fbf60b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:28:14.264048    3964 start.go:369] acquired machines lock for "flannel-330000" in 187.373458ms
	I0906 12:28:14.264171    3964 start.go:93] Provisioning new machine with config: &{Name:flannel-330000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.1 ClusterName:flannel-330000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:28:14.264382    3964 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:28:14.272691    3964 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0906 12:28:14.319709    3964 start.go:159] libmachine.API.Create for "flannel-330000" (driver="qemu2")
	I0906 12:28:14.319757    3964 client.go:168] LocalClient.Create starting
	I0906 12:28:14.319888    3964 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem
	I0906 12:28:14.319940    3964 main.go:141] libmachine: Decoding PEM data...
	I0906 12:28:14.319957    3964 main.go:141] libmachine: Parsing certificate...
	I0906 12:28:14.320023    3964 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/cert.pem
	I0906 12:28:14.320058    3964 main.go:141] libmachine: Decoding PEM data...
	I0906 12:28:14.320085    3964 main.go:141] libmachine: Parsing certificate...
	I0906 12:28:14.320627    3964 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17116-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 12:28:14.450034    3964 main.go:141] libmachine: Creating SSH key...
	I0906 12:28:14.521758    3964 main.go:141] libmachine: Creating Disk image...
	I0906 12:28:14.521772    3964 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:28:14.521953    3964 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/flannel-330000/disk.qcow2.raw /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/flannel-330000/disk.qcow2
	I0906 12:28:14.531110    3964 main.go:141] libmachine: STDOUT: 
	I0906 12:28:14.531133    3964 main.go:141] libmachine: STDERR: 
	I0906 12:28:14.531213    3964 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/flannel-330000/disk.qcow2 +20000M
	I0906 12:28:14.539204    3964 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:28:14.539220    3964 main.go:141] libmachine: STDERR: 
	I0906 12:28:14.539237    3964 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/flannel-330000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/flannel-330000/disk.qcow2
	I0906 12:28:14.539253    3964 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:28:14.539300    3964 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/flannel-330000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/flannel-330000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/flannel-330000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:28:57:b7:12:c0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/flannel-330000/disk.qcow2
	I0906 12:28:14.541003    3964 main.go:141] libmachine: STDOUT: 
	I0906 12:28:14.541015    3964 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:28:14.541036    3964 client.go:171] LocalClient.Create took 221.278125ms
	I0906 12:28:16.543103    3964 start.go:128] duration metric: createHost completed in 2.278768125s
	I0906 12:28:16.543122    3964 start.go:83] releasing machines lock for "flannel-330000", held for 2.279106583s
	W0906 12:28:16.543137    3964 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:28:16.557927    3964 out.go:177] * Deleting "flannel-330000" in qemu2 ...
	W0906 12:28:16.565088    3964 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:28:16.565095    3964 start.go:687] Will try again in 5 seconds ...
	I0906 12:28:21.567095    3964 start.go:365] acquiring machines lock for flannel-330000: {Name:mkcd55fe6404821d1fd04819f324aec293fbf60b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:28:21.567634    3964 start.go:369] acquired machines lock for "flannel-330000" in 451.542µs
	I0906 12:28:21.567791    3964 start.go:93] Provisioning new machine with config: &{Name:flannel-330000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.1 ClusterName:flannel-330000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:28:21.568077    3964 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:28:21.579632    3964 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0906 12:28:21.625364    3964 start.go:159] libmachine.API.Create for "flannel-330000" (driver="qemu2")
	I0906 12:28:21.625409    3964 client.go:168] LocalClient.Create starting
	I0906 12:28:21.625526    3964 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem
	I0906 12:28:21.625597    3964 main.go:141] libmachine: Decoding PEM data...
	I0906 12:28:21.625619    3964 main.go:141] libmachine: Parsing certificate...
	I0906 12:28:21.625720    3964 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/cert.pem
	I0906 12:28:21.625758    3964 main.go:141] libmachine: Decoding PEM data...
	I0906 12:28:21.625770    3964 main.go:141] libmachine: Parsing certificate...
	I0906 12:28:21.626350    3964 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17116-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 12:28:21.755434    3964 main.go:141] libmachine: Creating SSH key...
	I0906 12:28:21.875321    3964 main.go:141] libmachine: Creating Disk image...
	I0906 12:28:21.875329    3964 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:28:21.875481    3964 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/flannel-330000/disk.qcow2.raw /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/flannel-330000/disk.qcow2
	I0906 12:28:21.883936    3964 main.go:141] libmachine: STDOUT: 
	I0906 12:28:21.883962    3964 main.go:141] libmachine: STDERR: 
	I0906 12:28:21.884041    3964 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/flannel-330000/disk.qcow2 +20000M
	I0906 12:28:21.891169    3964 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:28:21.891181    3964 main.go:141] libmachine: STDERR: 
	I0906 12:28:21.891192    3964 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/flannel-330000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/flannel-330000/disk.qcow2
	I0906 12:28:21.891199    3964 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:28:21.891239    3964 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/flannel-330000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/flannel-330000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/flannel-330000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:bf:2d:e5:47:c7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/flannel-330000/disk.qcow2
	I0906 12:28:21.892720    3964 main.go:141] libmachine: STDOUT: 
	I0906 12:28:21.892734    3964 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:28:21.892746    3964 client.go:171] LocalClient.Create took 267.338791ms
	I0906 12:28:23.894847    3964 start.go:128] duration metric: createHost completed in 2.326809541s
	I0906 12:28:23.894939    3964 start.go:83] releasing machines lock for "flannel-330000", held for 2.327319042s
	W0906 12:28:23.895376    3964 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-330000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-330000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:28:23.905996    3964 out.go:177] 
	W0906 12:28:23.908928    3964 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:28:23.908978    3964 out.go:239] * 
	* 
	W0906 12:28:23.911378    3964 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:28:23.921951    3964 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.97s)

TestNetworkPlugins/group/bridge/Start (9.91s)

=== RUN   TestNetworkPlugins/group/bridge/Start
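The qemu invocations logged above are wrapped in socket_vmnet_client, which connects to /var/run/socket_vmnet and hands the connected descriptor to the child process as fd 3; that is the descriptor `-netdev socket,id=net0,fd=3` refers to. A minimal sketch of that fd-passing pattern, with a Python child standing in for qemu (everything here is illustrative, not minikube's or socket_vmnet's actual code):

```python
import os
import socket
import subprocess
import sys

# Parent connects a socket, pins it to fd 3, and spawns a child that
# uses the inherited descriptor directly -- the same handoff
# socket_vmnet_client performs before exec'ing qemu.
parent_end, child_end = socket.socketpair()

fd = child_end.fileno()
if fd != 3:              # pin the descriptor to number 3, as the child expects
    os.dup2(fd, 3)
os.set_inheritable(3, True)

child = subprocess.Popen(
    [sys.executable, "-c",
     "import socket; socket.socket(fileno=3).sendall(b'ready')"],
    pass_fds=(3,),       # keep fd 3 open across exec
)
child.wait()
print("received over fd 3:", parent_end.recv(16).decode())
```

Because the connect happens in the wrapper before qemu ever runs, a refused connection aborts the whole start, which is why these logs contain no qemu output at all.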
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-330000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-330000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.90690025s)

-- stdout --
	* [bridge-330000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17116
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node bridge-330000 in cluster bridge-330000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-330000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I0906 12:28:16.424369    4070 out.go:296] Setting OutFile to fd 1 ...
	I0906 12:28:16.424485    4070 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:28:16.424488    4070 out.go:309] Setting ErrFile to fd 2...
	I0906 12:28:16.424490    4070 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:28:16.424605    4070 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17116-1006/.minikube/bin
	I0906 12:28:16.425563    4070 out.go:303] Setting JSON to false
	I0906 12:28:16.440604    4070 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1670,"bootTime":1694026826,"procs":402,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:28:16.440674    4070 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 12:28:16.445058    4070 out.go:177] * [bridge-330000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 12:28:16.453026    4070 out.go:177]   - MINIKUBE_LOCATION=17116
	I0906 12:28:16.453103    4070 notify.go:220] Checking for updates...
	I0906 12:28:16.455993    4070 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	I0906 12:28:16.459047    4070 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:28:16.462023    4070 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:28:16.464968    4070 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	I0906 12:28:16.468031    4070 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:28:16.471339    4070 config.go:182] Loaded profile config "flannel-330000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 12:28:16.471409    4070 config.go:182] Loaded profile config "multinode-122000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 12:28:16.471464    4070 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 12:28:16.475996    4070 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 12:28:16.482935    4070 start.go:298] selected driver: qemu2
	I0906 12:28:16.482940    4070 start.go:902] validating driver "qemu2" against <nil>
	I0906 12:28:16.482947    4070 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:28:16.484890    4070 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0906 12:28:16.488966    4070 out.go:177] * Automatically selected the socket_vmnet network
	I0906 12:28:16.497002    4070 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:28:16.497026    4070 cni.go:84] Creating CNI manager for "bridge"
	I0906 12:28:16.497032    4070 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 12:28:16.497038    4070 start_flags.go:321] config:
	{Name:bridge-330000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:bridge-330000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHA
gentPID:0 AutoPauseInterval:1m0s}
	I0906 12:28:16.501180    4070 iso.go:125] acquiring lock: {Name:mk3f7665f27187d07892ba5c4ce3c49cda04e887 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:28:16.506959    4070 out.go:177] * Starting control plane node bridge-330000 in cluster bridge-330000
	I0906 12:28:16.510969    4070 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 12:28:16.510988    4070 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0906 12:28:16.511006    4070 cache.go:57] Caching tarball of preloaded images
	I0906 12:28:16.511061    4070 preload.go:174] Found /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 12:28:16.511067    4070 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0906 12:28:16.511137    4070 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/bridge-330000/config.json ...
	I0906 12:28:16.511155    4070 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/bridge-330000/config.json: {Name:mkf9d972bceb4ce74b1513533cbd683cbe233804 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:28:16.511349    4070 start.go:365] acquiring machines lock for bridge-330000: {Name:mkcd55fe6404821d1fd04819f324aec293fbf60b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:28:16.543144    4070 start.go:369] acquired machines lock for "bridge-330000" in 31.788166ms
	I0906 12:28:16.543169    4070 start.go:93] Provisioning new machine with config: &{Name:bridge-330000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.1 ClusterName:bridge-330000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:28:16.543211    4070 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:28:16.550060    4070 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0906 12:28:16.568052    4070 start.go:159] libmachine.API.Create for "bridge-330000" (driver="qemu2")
	I0906 12:28:16.568078    4070 client.go:168] LocalClient.Create starting
	I0906 12:28:16.568133    4070 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem
	I0906 12:28:16.568160    4070 main.go:141] libmachine: Decoding PEM data...
	I0906 12:28:16.568177    4070 main.go:141] libmachine: Parsing certificate...
	I0906 12:28:16.568219    4070 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/cert.pem
	I0906 12:28:16.568240    4070 main.go:141] libmachine: Decoding PEM data...
	I0906 12:28:16.568248    4070 main.go:141] libmachine: Parsing certificate...
	I0906 12:28:16.570309    4070 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17116-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 12:28:16.684314    4070 main.go:141] libmachine: Creating SSH key...
	I0906 12:28:17.011296    4070 main.go:141] libmachine: Creating Disk image...
	I0906 12:28:17.011308    4070 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:28:17.011516    4070 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/bridge-330000/disk.qcow2.raw /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/bridge-330000/disk.qcow2
	I0906 12:28:17.020922    4070 main.go:141] libmachine: STDOUT: 
	I0906 12:28:17.020937    4070 main.go:141] libmachine: STDERR: 
	I0906 12:28:17.020990    4070 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/bridge-330000/disk.qcow2 +20000M
	I0906 12:28:17.028279    4070 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:28:17.028307    4070 main.go:141] libmachine: STDERR: 
	I0906 12:28:17.028331    4070 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/bridge-330000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/bridge-330000/disk.qcow2
	I0906 12:28:17.028339    4070 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:28:17.028377    4070 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/bridge-330000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/bridge-330000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/bridge-330000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:af:b4:aa:c4:7f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/bridge-330000/disk.qcow2
	I0906 12:28:17.029932    4070 main.go:141] libmachine: STDOUT: 
	I0906 12:28:17.029944    4070 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:28:17.029965    4070 client.go:171] LocalClient.Create took 461.893416ms
	I0906 12:28:19.032060    4070 start.go:128] duration metric: createHost completed in 2.488900041s
	I0906 12:28:19.032117    4070 start.go:83] releasing machines lock for "bridge-330000", held for 2.489027167s
	W0906 12:28:19.032209    4070 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:28:19.045155    4070 out.go:177] * Deleting "bridge-330000" in qemu2 ...
	W0906 12:28:19.067637    4070 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:28:19.067691    4070 start.go:687] Will try again in 5 seconds ...
	I0906 12:28:24.069646    4070 start.go:365] acquiring machines lock for bridge-330000: {Name:mkcd55fe6404821d1fd04819f324aec293fbf60b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:28:24.069726    4070 start.go:369] acquired machines lock for "bridge-330000" in 59.958µs
	I0906 12:28:24.069754    4070 start.go:93] Provisioning new machine with config: &{Name:bridge-330000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.1 ClusterName:bridge-330000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:28:24.069807    4070 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:28:24.075881    4070 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0906 12:28:24.090179    4070 start.go:159] libmachine.API.Create for "bridge-330000" (driver="qemu2")
	I0906 12:28:24.090205    4070 client.go:168] LocalClient.Create starting
	I0906 12:28:24.090268    4070 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem
	I0906 12:28:24.090292    4070 main.go:141] libmachine: Decoding PEM data...
	I0906 12:28:24.090305    4070 main.go:141] libmachine: Parsing certificate...
	I0906 12:28:24.090344    4070 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/cert.pem
	I0906 12:28:24.090358    4070 main.go:141] libmachine: Decoding PEM data...
	I0906 12:28:24.090365    4070 main.go:141] libmachine: Parsing certificate...
	I0906 12:28:24.090629    4070 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17116-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 12:28:24.209896    4070 main.go:141] libmachine: Creating SSH key...
	I0906 12:28:24.237863    4070 main.go:141] libmachine: Creating Disk image...
	I0906 12:28:24.237887    4070 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:28:24.238204    4070 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/bridge-330000/disk.qcow2.raw /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/bridge-330000/disk.qcow2
	I0906 12:28:24.247035    4070 main.go:141] libmachine: STDOUT: 
	I0906 12:28:24.247054    4070 main.go:141] libmachine: STDERR: 
	I0906 12:28:24.247115    4070 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/bridge-330000/disk.qcow2 +20000M
	I0906 12:28:24.255404    4070 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:28:24.255423    4070 main.go:141] libmachine: STDERR: 
	I0906 12:28:24.255442    4070 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/bridge-330000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/bridge-330000/disk.qcow2
	I0906 12:28:24.255449    4070 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:28:24.255495    4070 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/bridge-330000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/bridge-330000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/bridge-330000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:76:69:7e:52:9b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/bridge-330000/disk.qcow2
	I0906 12:28:24.257227    4070 main.go:141] libmachine: STDOUT: 
	I0906 12:28:24.257241    4070 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:28:24.257256    4070 client.go:171] LocalClient.Create took 167.051542ms
	I0906 12:28:26.259244    4070 start.go:128] duration metric: createHost completed in 2.189489958s
	I0906 12:28:26.259260    4070 start.go:83] releasing machines lock for "bridge-330000", held for 2.189588042s
	W0906 12:28:26.259342    4070 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-330000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-330000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:28:26.268155    4070 out.go:177] 
	W0906 12:28:26.281167    4070 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:28:26.281173    4070 out.go:239] * 
	* 
	W0906 12:28:26.281682    4070 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:28:26.291998    4070 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.91s)

TestNetworkPlugins/group/kubenet/Start (9.74s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-330000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-330000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.734816791s)

-- stdout --
	* [kubenet-330000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17116
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kubenet-330000 in cluster kubenet-330000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-330000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 12:28:26.245116    4192 out.go:296] Setting OutFile to fd 1 ...
	I0906 12:28:26.245241    4192 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:28:26.245245    4192 out.go:309] Setting ErrFile to fd 2...
	I0906 12:28:26.245247    4192 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:28:26.245357    4192 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17116-1006/.minikube/bin
	I0906 12:28:26.246443    4192 out.go:303] Setting JSON to false
	I0906 12:28:26.261535    4192 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1680,"bootTime":1694026826,"procs":399,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:28:26.261603    4192 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 12:28:26.272181    4192 out.go:177] * [kubenet-330000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 12:28:26.291999    4192 out.go:177]   - MINIKUBE_LOCATION=17116
	I0906 12:28:26.284256    4192 notify.go:220] Checking for updates...
	I0906 12:28:26.308155    4192 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	I0906 12:28:26.312174    4192 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:28:26.317275    4192 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:28:26.324122    4192 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	I0906 12:28:26.327134    4192 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:28:26.331464    4192 config.go:182] Loaded profile config "bridge-330000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 12:28:26.331535    4192 config.go:182] Loaded profile config "multinode-122000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 12:28:26.331582    4192 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 12:28:26.334136    4192 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 12:28:26.342155    4192 start.go:298] selected driver: qemu2
	I0906 12:28:26.342165    4192 start.go:902] validating driver "qemu2" against <nil>
	I0906 12:28:26.342171    4192 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:28:26.344190    4192 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0906 12:28:26.347978    4192 out.go:177] * Automatically selected the socket_vmnet network
	I0906 12:28:26.352200    4192 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:28:26.352220    4192 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0906 12:28:26.352228    4192 start_flags.go:321] config:
	{Name:kubenet-330000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:kubenet-330000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHA
gentPID:0 AutoPauseInterval:1m0s}
	I0906 12:28:26.356745    4192 iso.go:125] acquiring lock: {Name:mk3f7665f27187d07892ba5c4ce3c49cda04e887 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:28:26.361172    4192 out.go:177] * Starting control plane node kubenet-330000 in cluster kubenet-330000
	I0906 12:28:26.369130    4192 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 12:28:26.369170    4192 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0906 12:28:26.369183    4192 cache.go:57] Caching tarball of preloaded images
	I0906 12:28:26.369265    4192 preload.go:174] Found /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 12:28:26.369270    4192 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0906 12:28:26.369333    4192 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/kubenet-330000/config.json ...
	I0906 12:28:26.369344    4192 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/kubenet-330000/config.json: {Name:mkda32dd9279f8ba17afb050e5635ae6af5ea4e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:28:26.369576    4192 start.go:365] acquiring machines lock for kubenet-330000: {Name:mkcd55fe6404821d1fd04819f324aec293fbf60b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:28:26.369602    4192 start.go:369] acquired machines lock for "kubenet-330000" in 21.125µs
	I0906 12:28:26.369612    4192 start.go:93] Provisioning new machine with config: &{Name:kubenet-330000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.1 ClusterName:kubenet-330000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:28:26.369644    4192 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:28:26.374140    4192 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0906 12:28:26.388429    4192 start.go:159] libmachine.API.Create for "kubenet-330000" (driver="qemu2")
	I0906 12:28:26.388456    4192 client.go:168] LocalClient.Create starting
	I0906 12:28:26.388534    4192 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem
	I0906 12:28:26.388561    4192 main.go:141] libmachine: Decoding PEM data...
	I0906 12:28:26.388569    4192 main.go:141] libmachine: Parsing certificate...
	I0906 12:28:26.388610    4192 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/cert.pem
	I0906 12:28:26.388628    4192 main.go:141] libmachine: Decoding PEM data...
	I0906 12:28:26.388636    4192 main.go:141] libmachine: Parsing certificate...
	I0906 12:28:26.388930    4192 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17116-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 12:28:26.509041    4192 main.go:141] libmachine: Creating SSH key...
	I0906 12:28:26.587193    4192 main.go:141] libmachine: Creating Disk image...
	I0906 12:28:26.587203    4192 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:28:26.587389    4192 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/kubenet-330000/disk.qcow2.raw /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/kubenet-330000/disk.qcow2
	I0906 12:28:26.596407    4192 main.go:141] libmachine: STDOUT: 
	I0906 12:28:26.596429    4192 main.go:141] libmachine: STDERR: 
	I0906 12:28:26.596503    4192 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/kubenet-330000/disk.qcow2 +20000M
	I0906 12:28:26.604850    4192 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:28:26.604869    4192 main.go:141] libmachine: STDERR: 
	I0906 12:28:26.604892    4192 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/kubenet-330000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/kubenet-330000/disk.qcow2
	I0906 12:28:26.604900    4192 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:28:26.604942    4192 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/kubenet-330000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/kubenet-330000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/kubenet-330000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:3e:a7:86:d2:3b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/kubenet-330000/disk.qcow2
	I0906 12:28:26.606734    4192 main.go:141] libmachine: STDOUT: 
	I0906 12:28:26.606749    4192 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:28:26.606770    4192 client.go:171] LocalClient.Create took 218.314042ms
	I0906 12:28:28.608902    4192 start.go:128] duration metric: createHost completed in 2.239294667s
	I0906 12:28:28.608961    4192 start.go:83] releasing machines lock for "kubenet-330000", held for 2.239409459s
	W0906 12:28:28.609008    4192 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:28:28.625912    4192 out.go:177] * Deleting "kubenet-330000" in qemu2 ...
	W0906 12:28:28.641471    4192 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:28:28.641492    4192 start.go:687] Will try again in 5 seconds ...
	I0906 12:28:33.643611    4192 start.go:365] acquiring machines lock for kubenet-330000: {Name:mkcd55fe6404821d1fd04819f324aec293fbf60b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:28:33.644199    4192 start.go:369] acquired machines lock for "kubenet-330000" in 409.167µs
	I0906 12:28:33.644424    4192 start.go:93] Provisioning new machine with config: &{Name:kubenet-330000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.1 ClusterName:kubenet-330000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:28:33.644733    4192 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:28:33.654122    4192 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0906 12:28:33.703272    4192 start.go:159] libmachine.API.Create for "kubenet-330000" (driver="qemu2")
	I0906 12:28:33.703310    4192 client.go:168] LocalClient.Create starting
	I0906 12:28:33.703429    4192 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem
	I0906 12:28:33.703479    4192 main.go:141] libmachine: Decoding PEM data...
	I0906 12:28:33.703498    4192 main.go:141] libmachine: Parsing certificate...
	I0906 12:28:33.703573    4192 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/cert.pem
	I0906 12:28:33.703608    4192 main.go:141] libmachine: Decoding PEM data...
	I0906 12:28:33.703619    4192 main.go:141] libmachine: Parsing certificate...
	I0906 12:28:33.704157    4192 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17116-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 12:28:33.833848    4192 main.go:141] libmachine: Creating SSH key...
	I0906 12:28:33.893199    4192 main.go:141] libmachine: Creating Disk image...
	I0906 12:28:33.893207    4192 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:28:33.893397    4192 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/kubenet-330000/disk.qcow2.raw /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/kubenet-330000/disk.qcow2
	I0906 12:28:33.901997    4192 main.go:141] libmachine: STDOUT: 
	I0906 12:28:33.902013    4192 main.go:141] libmachine: STDERR: 
	I0906 12:28:33.902064    4192 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/kubenet-330000/disk.qcow2 +20000M
	I0906 12:28:33.909135    4192 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:28:33.909148    4192 main.go:141] libmachine: STDERR: 
	I0906 12:28:33.909162    4192 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/kubenet-330000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/kubenet-330000/disk.qcow2
	I0906 12:28:33.909171    4192 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:28:33.909216    4192 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/kubenet-330000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/kubenet-330000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/kubenet-330000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:fc:40:8f:31:04 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/kubenet-330000/disk.qcow2
	I0906 12:28:33.910724    4192 main.go:141] libmachine: STDOUT: 
	I0906 12:28:33.910736    4192 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:28:33.910748    4192 client.go:171] LocalClient.Create took 207.438792ms
	I0906 12:28:35.912952    4192 start.go:128] duration metric: createHost completed in 2.268254166s
	I0906 12:28:35.913004    4192 start.go:83] releasing machines lock for "kubenet-330000", held for 2.268808542s
	W0906 12:28:35.913352    4192 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-330000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-330000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:28:35.921931    4192 out.go:177] 
	W0906 12:28:35.927108    4192 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:28:35.927132    4192 out.go:239] * 
	* 
	W0906 12:28:35.929668    4192 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:28:35.938986    4192 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.74s)

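Not part of the captured log: every failed start in this report reduces to the same root cause, `socket_vmnet_client` reporting `Failed to connect to "/var/run/socket_vmnet": Connection refused` before QEMU ever boots. A minimal probe, sketched here in Python (the helper name `probe_unix_socket` is illustrative, not from minikube), can distinguish a dead daemon behind a stale socket file from a daemon that was never launched:

```python
import socket

SOCKET_PATH = "/var/run/socket_vmnet"  # path taken from the log above

def probe_unix_socket(path: str) -> str:
    """Attempt a stream connect to a unix-domain socket.

    Returns "ok" if something is listening; otherwise the OS error text:
    "Connection refused" (ECONNREFUSED) means the socket file exists but no
    daemon is accepting on it, while "No such file or directory" (ENOENT)
    means the daemon never created the socket at all.
    """
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        try:
            s.connect(path)
            return "ok"
        except OSError as e:
            return e.strerror or str(e)

if __name__ == "__main__":
    print(probe_unix_socket(SOCKET_PATH))
```

On the failing agent this would likely print "Connection refused", suggesting the remedy is restarting the socket_vmnet daemon on the host rather than the `minikube delete -p <profile>` that the error message proposes.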
TestStartStop/group/old-k8s-version/serial/FirstStart (9.98s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-694000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-694000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0: exit status 80 (9.948181083s)

-- stdout --
	* [old-k8s-version-694000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17116
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node old-k8s-version-694000 in cluster old-k8s-version-694000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-694000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 12:28:28.383836    4298 out.go:296] Setting OutFile to fd 1 ...
	I0906 12:28:28.383939    4298 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:28:28.383942    4298 out.go:309] Setting ErrFile to fd 2...
	I0906 12:28:28.383944    4298 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:28:28.384057    4298 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17116-1006/.minikube/bin
	I0906 12:28:28.385053    4298 out.go:303] Setting JSON to false
	I0906 12:28:28.400397    4298 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1682,"bootTime":1694026826,"procs":399,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:28:28.400469    4298 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 12:28:28.405819    4298 out.go:177] * [old-k8s-version-694000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 12:28:28.409720    4298 out.go:177]   - MINIKUBE_LOCATION=17116
	I0906 12:28:28.409758    4298 notify.go:220] Checking for updates...
	I0906 12:28:28.412776    4298 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	I0906 12:28:28.416733    4298 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:28:28.419770    4298 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:28:28.421074    4298 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	I0906 12:28:28.423715    4298 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:28:28.427036    4298 config.go:182] Loaded profile config "kubenet-330000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 12:28:28.427102    4298 config.go:182] Loaded profile config "multinode-122000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 12:28:28.427153    4298 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 12:28:28.431550    4298 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 12:28:28.438701    4298 start.go:298] selected driver: qemu2
	I0906 12:28:28.438711    4298 start.go:902] validating driver "qemu2" against <nil>
	I0906 12:28:28.438718    4298 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:28:28.440732    4298 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0906 12:28:28.443771    4298 out.go:177] * Automatically selected the socket_vmnet network
	I0906 12:28:28.446857    4298 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:28:28.446887    4298 cni.go:84] Creating CNI manager for ""
	I0906 12:28:28.446899    4298 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0906 12:28:28.446912    4298 start_flags.go:321] config:
	{Name:old-k8s-version-694000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-694000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 12:28:28.451652    4298 iso.go:125] acquiring lock: {Name:mk3f7665f27187d07892ba5c4ce3c49cda04e887 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:28:28.457605    4298 out.go:177] * Starting control plane node old-k8s-version-694000 in cluster old-k8s-version-694000
	I0906 12:28:28.461670    4298 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0906 12:28:28.461707    4298 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0906 12:28:28.461724    4298 cache.go:57] Caching tarball of preloaded images
	I0906 12:28:28.461788    4298 preload.go:174] Found /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 12:28:28.461800    4298 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0906 12:28:28.461870    4298 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/old-k8s-version-694000/config.json ...
	I0906 12:28:28.461883    4298 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/old-k8s-version-694000/config.json: {Name:mkd814724cd215165ebef81b39eacde8e29a2090 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:28:28.462114    4298 start.go:365] acquiring machines lock for old-k8s-version-694000: {Name:mkcd55fe6404821d1fd04819f324aec293fbf60b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:28:28.609060    4298 start.go:369] acquired machines lock for "old-k8s-version-694000" in 146.934459ms
	I0906 12:28:28.609162    4298 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-694000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-694000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:28:28.609463    4298 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:28:28.613929    4298 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0906 12:28:28.655988    4298 start.go:159] libmachine.API.Create for "old-k8s-version-694000" (driver="qemu2")
	I0906 12:28:28.656034    4298 client.go:168] LocalClient.Create starting
	I0906 12:28:28.656151    4298 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem
	I0906 12:28:28.656194    4298 main.go:141] libmachine: Decoding PEM data...
	I0906 12:28:28.656218    4298 main.go:141] libmachine: Parsing certificate...
	I0906 12:28:28.656282    4298 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/cert.pem
	I0906 12:28:28.656316    4298 main.go:141] libmachine: Decoding PEM data...
	I0906 12:28:28.656330    4298 main.go:141] libmachine: Parsing certificate...
	I0906 12:28:28.656879    4298 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17116-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 12:28:28.781593    4298 main.go:141] libmachine: Creating SSH key...
	I0906 12:28:28.930029    4298 main.go:141] libmachine: Creating Disk image...
	I0906 12:28:28.930036    4298 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:28:28.930200    4298 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/old-k8s-version-694000/disk.qcow2.raw /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/old-k8s-version-694000/disk.qcow2
	I0906 12:28:28.939085    4298 main.go:141] libmachine: STDOUT: 
	I0906 12:28:28.939111    4298 main.go:141] libmachine: STDERR: 
	I0906 12:28:28.939166    4298 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/old-k8s-version-694000/disk.qcow2 +20000M
	I0906 12:28:28.946485    4298 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:28:28.946498    4298 main.go:141] libmachine: STDERR: 
	I0906 12:28:28.946517    4298 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/old-k8s-version-694000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/old-k8s-version-694000/disk.qcow2
	I0906 12:28:28.946525    4298 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:28:28.946560    4298 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/old-k8s-version-694000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/old-k8s-version-694000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/old-k8s-version-694000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:cd:00:f0:7d:d4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/old-k8s-version-694000/disk.qcow2
	I0906 12:28:28.948030    4298 main.go:141] libmachine: STDOUT: 
	I0906 12:28:28.948043    4298 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:28:28.948062    4298 client.go:171] LocalClient.Create took 292.026875ms
	I0906 12:28:30.950179    4298 start.go:128] duration metric: createHost completed in 2.340757916s
	I0906 12:28:30.950262    4298 start.go:83] releasing machines lock for "old-k8s-version-694000", held for 2.341194958s
	W0906 12:28:30.950324    4298 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:28:30.961543    4298 out.go:177] * Deleting "old-k8s-version-694000" in qemu2 ...
	W0906 12:28:30.983128    4298 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:28:30.983156    4298 start.go:687] Will try again in 5 seconds ...
	I0906 12:28:35.985163    4298 start.go:365] acquiring machines lock for old-k8s-version-694000: {Name:mkcd55fe6404821d1fd04819f324aec293fbf60b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:28:35.985295    4298 start.go:369] acquired machines lock for "old-k8s-version-694000" in 92.417µs
	I0906 12:28:35.985354    4298 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-694000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-694000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:28:35.985465    4298 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:28:35.993961    4298 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0906 12:28:36.017978    4298 start.go:159] libmachine.API.Create for "old-k8s-version-694000" (driver="qemu2")
	I0906 12:28:36.018012    4298 client.go:168] LocalClient.Create starting
	I0906 12:28:36.018140    4298 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem
	I0906 12:28:36.018168    4298 main.go:141] libmachine: Decoding PEM data...
	I0906 12:28:36.018181    4298 main.go:141] libmachine: Parsing certificate...
	I0906 12:28:36.018229    4298 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/cert.pem
	I0906 12:28:36.018248    4298 main.go:141] libmachine: Decoding PEM data...
	I0906 12:28:36.018259    4298 main.go:141] libmachine: Parsing certificate...
	I0906 12:28:36.018588    4298 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17116-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 12:28:36.140295    4298 main.go:141] libmachine: Creating SSH key...
	I0906 12:28:36.242386    4298 main.go:141] libmachine: Creating Disk image...
	I0906 12:28:36.242400    4298 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:28:36.242598    4298 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/old-k8s-version-694000/disk.qcow2.raw /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/old-k8s-version-694000/disk.qcow2
	I0906 12:28:36.251959    4298 main.go:141] libmachine: STDOUT: 
	I0906 12:28:36.251989    4298 main.go:141] libmachine: STDERR: 
	I0906 12:28:36.252067    4298 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/old-k8s-version-694000/disk.qcow2 +20000M
	I0906 12:28:36.260091    4298 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:28:36.260108    4298 main.go:141] libmachine: STDERR: 
	I0906 12:28:36.260124    4298 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/old-k8s-version-694000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/old-k8s-version-694000/disk.qcow2
	I0906 12:28:36.260138    4298 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:28:36.260186    4298 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/old-k8s-version-694000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/old-k8s-version-694000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/old-k8s-version-694000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:b3:f5:3b:2c:87 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/old-k8s-version-694000/disk.qcow2
	I0906 12:28:36.261895    4298 main.go:141] libmachine: STDOUT: 
	I0906 12:28:36.261906    4298 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:28:36.261919    4298 client.go:171] LocalClient.Create took 243.909542ms
	I0906 12:28:38.263322    4298 start.go:128] duration metric: createHost completed in 2.277908708s
	I0906 12:28:38.263344    4298 start.go:83] releasing machines lock for "old-k8s-version-694000", held for 2.27810175s
	W0906 12:28:38.263444    4298 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-694000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-694000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:28:38.272690    4298 out.go:177] 
	W0906 12:28:38.283781    4298 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:28:38.283790    4298 out.go:239] * 
	* 
	W0906 12:28:38.284288    4298 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:28:38.295664    4298 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-694000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-694000 -n old-k8s-version-694000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-694000 -n old-k8s-version-694000: exit status 7 (34.793125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-694000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.98s)

TestStartStop/group/no-preload/serial/FirstStart (9.91s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-516000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-516000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.1: exit status 80 (9.856027334s)

-- stdout --
	* [no-preload-516000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17116
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node no-preload-516000 in cluster no-preload-516000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-516000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I0906 12:28:38.078380    4414 out.go:296] Setting OutFile to fd 1 ...
	I0906 12:28:38.078494    4414 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:28:38.078497    4414 out.go:309] Setting ErrFile to fd 2...
	I0906 12:28:38.078503    4414 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:28:38.078607    4414 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17116-1006/.minikube/bin
	I0906 12:28:38.079632    4414 out.go:303] Setting JSON to false
	I0906 12:28:38.094636    4414 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1692,"bootTime":1694026826,"procs":401,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:28:38.094696    4414 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 12:28:38.099835    4414 out.go:177] * [no-preload-516000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 12:28:38.103669    4414 out.go:177]   - MINIKUBE_LOCATION=17116
	I0906 12:28:38.107713    4414 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	I0906 12:28:38.103742    4414 notify.go:220] Checking for updates...
	I0906 12:28:38.112771    4414 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:28:38.115678    4414 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:28:38.118734    4414 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	I0906 12:28:38.121779    4414 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:28:38.125082    4414 config.go:182] Loaded profile config "multinode-122000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 12:28:38.125158    4414 config.go:182] Loaded profile config "old-k8s-version-694000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0906 12:28:38.125197    4414 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 12:28:38.129752    4414 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 12:28:38.136732    4414 start.go:298] selected driver: qemu2
	I0906 12:28:38.136740    4414 start.go:902] validating driver "qemu2" against <nil>
	I0906 12:28:38.136746    4414 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:28:38.138713    4414 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0906 12:28:38.141735    4414 out.go:177] * Automatically selected the socket_vmnet network
	I0906 12:28:38.144842    4414 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:28:38.144862    4414 cni.go:84] Creating CNI manager for ""
	I0906 12:28:38.144868    4414 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 12:28:38.144871    4414 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 12:28:38.144878    4414 start_flags.go:321] config:
	{Name:no-preload-516000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:no-preload-516000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 12:28:38.149117    4414 iso.go:125] acquiring lock: {Name:mk3f7665f27187d07892ba5c4ce3c49cda04e887 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:28:38.156763    4414 out.go:177] * Starting control plane node no-preload-516000 in cluster no-preload-516000
	I0906 12:28:38.160772    4414 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 12:28:38.160840    4414 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/no-preload-516000/config.json ...
	I0906 12:28:38.160853    4414 cache.go:107] acquiring lock: {Name:mkb5bfb95e12e7b110ffa3b5337b65056a9d05bc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:28:38.160864    4414 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/no-preload-516000/config.json: {Name:mk3e921d7bf5dcde5ec494c2b412b0528856fb5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:28:38.160857    4414 cache.go:107] acquiring lock: {Name:mk614819d8e677c0d43908025d8bf7b81dec2d04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:28:38.160874    4414 cache.go:107] acquiring lock: {Name:mkac12efbe4b49755dd310cd4a2b70ca37e2a116 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:28:38.160886    4414 cache.go:107] acquiring lock: {Name:mk72cf879c699b61770cff2e43d3225f0c03109e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:28:38.161048    4414 cache.go:107] acquiring lock: {Name:mk61310604d837a2e71a8e6d121a25ec4a38d20f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:28:38.161082    4414 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.1
	I0906 12:28:38.161100    4414 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0906 12:28:38.161104    4414 cache.go:115] /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0906 12:28:38.161123    4414 cache.go:107] acquiring lock: {Name:mk7bf497782e16346607c4d7b17c59ca2f5d6174 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:28:38.161115    4414 cache.go:107] acquiring lock: {Name:mkfa0db9eae71f56f1f9cb374660ce1cd258de6a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:28:38.161097    4414 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.1
	I0906 12:28:38.161145    4414 cache.go:107] acquiring lock: {Name:mk7880c43623d0f1c5b2f2c5f167495557261c5a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:28:38.161114    4414 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 260.5µs
	I0906 12:28:38.161190    4414 start.go:365] acquiring machines lock for no-preload-516000: {Name:mkcd55fe6404821d1fd04819f324aec293fbf60b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:28:38.161195    4414 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I0906 12:28:38.161211    4414 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.1
	I0906 12:28:38.161196    4414 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0906 12:28:38.161284    4414 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0906 12:28:38.161311    4414 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0906 12:28:38.169043    4414 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.1
	I0906 12:28:38.169160    4414 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0906 12:28:38.169951    4414 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0906 12:28:38.170081    4414 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0906 12:28:38.170145    4414 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I0906 12:28:38.170452    4414 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.1
	I0906 12:28:38.170567    4414 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.1
	I0906 12:28:38.263506    4414 start.go:369] acquired machines lock for "no-preload-516000" in 102.305125ms
	I0906 12:28:38.263576    4414 start.go:93] Provisioning new machine with config: &{Name:no-preload-516000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:no-preload-516000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:28:38.263652    4414 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:28:38.279722    4414 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0906 12:28:38.293601    4414 start.go:159] libmachine.API.Create for "no-preload-516000" (driver="qemu2")
	I0906 12:28:38.293627    4414 client.go:168] LocalClient.Create starting
	I0906 12:28:38.293693    4414 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem
	I0906 12:28:38.293717    4414 main.go:141] libmachine: Decoding PEM data...
	I0906 12:28:38.293729    4414 main.go:141] libmachine: Parsing certificate...
	I0906 12:28:38.293768    4414 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/cert.pem
	I0906 12:28:38.293786    4414 main.go:141] libmachine: Decoding PEM data...
	I0906 12:28:38.293793    4414 main.go:141] libmachine: Parsing certificate...
	I0906 12:28:38.300060    4414 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17116-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 12:28:38.431861    4414 main.go:141] libmachine: Creating SSH key...
	I0906 12:28:38.507345    4414 main.go:141] libmachine: Creating Disk image...
	I0906 12:28:38.507378    4414 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:28:38.507566    4414 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/no-preload-516000/disk.qcow2.raw /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/no-preload-516000/disk.qcow2
	I0906 12:28:38.516543    4414 main.go:141] libmachine: STDOUT: 
	I0906 12:28:38.516562    4414 main.go:141] libmachine: STDERR: 
	I0906 12:28:38.516642    4414 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/no-preload-516000/disk.qcow2 +20000M
	I0906 12:28:38.525048    4414 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:28:38.525065    4414 main.go:141] libmachine: STDERR: 
	I0906 12:28:38.525086    4414 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/no-preload-516000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/no-preload-516000/disk.qcow2
	I0906 12:28:38.525093    4414 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:28:38.525137    4414 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/no-preload-516000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/no-preload-516000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/no-preload-516000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:a4:2d:a1:d8:26 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/no-preload-516000/disk.qcow2
	I0906 12:28:38.527404    4414 main.go:141] libmachine: STDOUT: 
	I0906 12:28:38.527419    4414 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:28:38.527443    4414 client.go:171] LocalClient.Create took 233.816041ms
	I0906 12:28:38.750114    4414 cache.go:162] opening:  /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.1
	I0906 12:28:38.806894    4414 cache.go:162] opening:  /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0906 12:28:38.981198    4414 cache.go:157] /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0906 12:28:38.981218    4414 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 820.12925ms
	I0906 12:28:38.981234    4414 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0906 12:28:38.988157    4414 cache.go:162] opening:  /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1
	I0906 12:28:39.247351    4414 cache.go:162] opening:  /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.1
	I0906 12:28:39.409886    4414 cache.go:162] opening:  /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0
	I0906 12:28:39.622269    4414 cache.go:162] opening:  /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.1
	I0906 12:28:39.819094    4414 cache.go:162] opening:  /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.1
	I0906 12:28:40.528196    4414 start.go:128] duration metric: createHost completed in 2.2645775s
	I0906 12:28:40.528241    4414 start.go:83] releasing machines lock for "no-preload-516000", held for 2.264766458s
	W0906 12:28:40.528317    4414 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:28:40.543871    4414 out.go:177] * Deleting "no-preload-516000" in qemu2 ...
	W0906 12:28:40.566396    4414 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:28:40.566425    4414 start.go:687] Will try again in 5 seconds ...
	I0906 12:28:41.697317    4414 cache.go:157] /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I0906 12:28:41.697364    4414 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1" took 3.53640425s
	I0906 12:28:41.697390    4414 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I0906 12:28:41.802868    4414 cache.go:157] /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.1 exists
	I0906 12:28:41.802921    4414 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.28.1" -> "/Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.1" took 3.642139s
	I0906 12:28:41.802951    4414 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.28.1 -> /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.1 succeeded
	I0906 12:28:43.023388    4414 cache.go:157] /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.1 exists
	I0906 12:28:43.023432    4414 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.28.1" -> "/Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.1" took 4.862702833s
	I0906 12:28:43.023459    4414 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.28.1 -> /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.1 succeeded
	I0906 12:28:43.820878    4414 cache.go:157] /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.1 exists
	I0906 12:28:43.820925    4414 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.28.1" -> "/Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.1" took 5.659978792s
	I0906 12:28:43.820952    4414 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.28.1 -> /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.1 succeeded
	I0906 12:28:43.829803    4414 cache.go:157] /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.1 exists
	I0906 12:28:43.829856    4414 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.28.1" -> "/Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.1" took 5.669137292s
	I0906 12:28:43.829886    4414 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.28.1 -> /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.1 succeeded
	I0906 12:28:45.575028    4414 start.go:365] acquiring machines lock for no-preload-516000: {Name:mkcd55fe6404821d1fd04819f324aec293fbf60b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:28:45.584574    4414 start.go:369] acquired machines lock for "no-preload-516000" in 9.494333ms
	I0906 12:28:45.584614    4414 start.go:93] Provisioning new machine with config: &{Name:no-preload-516000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:no-preload-516000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:28:45.584817    4414 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:28:45.591732    4414 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0906 12:28:45.635485    4414 start.go:159] libmachine.API.Create for "no-preload-516000" (driver="qemu2")
	I0906 12:28:45.635525    4414 client.go:168] LocalClient.Create starting
	I0906 12:28:45.635655    4414 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem
	I0906 12:28:45.635716    4414 main.go:141] libmachine: Decoding PEM data...
	I0906 12:28:45.635739    4414 main.go:141] libmachine: Parsing certificate...
	I0906 12:28:45.635806    4414 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/cert.pem
	I0906 12:28:45.635842    4414 main.go:141] libmachine: Decoding PEM data...
	I0906 12:28:45.635861    4414 main.go:141] libmachine: Parsing certificate...
	I0906 12:28:45.636332    4414 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17116-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 12:28:45.767003    4414 main.go:141] libmachine: Creating SSH key...
	I0906 12:28:45.847177    4414 main.go:141] libmachine: Creating Disk image...
	I0906 12:28:45.847184    4414 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:28:45.847330    4414 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/no-preload-516000/disk.qcow2.raw /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/no-preload-516000/disk.qcow2
	I0906 12:28:45.856408    4414 main.go:141] libmachine: STDOUT: 
	I0906 12:28:45.856427    4414 main.go:141] libmachine: STDERR: 
	I0906 12:28:45.856501    4414 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/no-preload-516000/disk.qcow2 +20000M
	I0906 12:28:45.864930    4414 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:28:45.864961    4414 main.go:141] libmachine: STDERR: 
	I0906 12:28:45.864974    4414 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/no-preload-516000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/no-preload-516000/disk.qcow2
	I0906 12:28:45.864985    4414 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:28:45.865041    4414 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/no-preload-516000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/no-preload-516000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/no-preload-516000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:02:d3:46:7b:e1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/no-preload-516000/disk.qcow2
	I0906 12:28:45.866820    4414 main.go:141] libmachine: STDOUT: 
	I0906 12:28:45.866837    4414 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:28:45.866848    4414 client.go:171] LocalClient.Create took 231.325042ms
	I0906 12:28:47.867024    4414 start.go:128] duration metric: createHost completed in 2.2822075s
	I0906 12:28:47.867087    4414 start.go:83] releasing machines lock for "no-preload-516000", held for 2.282553s
	W0906 12:28:47.867366    4414 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-516000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-516000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:28:47.881770    4414 out.go:177] 
	W0906 12:28:47.886888    4414 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:28:47.886914    4414 out.go:239] * 
	* 
	W0906 12:28:47.888865    4414 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:28:47.897808    4414 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-516000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-516000 -n no-preload-516000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-516000 -n no-preload-516000: exit status 7 (48.15075ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-516000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.91s)
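Every qemu2 start in this run fails at the same step: `socket_vmnet_client` gets `Connection refused` on `/var/run/socket_vmnet`, meaning no daemon is listening on that Unix socket. A minimal diagnosis sketch (the socket path is taken from the log above; the Homebrew service name is an assumption about how socket_vmnet was installed on this agent):

```shell
# The repeated "Connection refused" means nothing is listening on the
# socket_vmnet Unix socket. Check whether the socket file exists at the
# path minikube uses (path copied from the log above).
SOCK=/var/run/socket_vmnet
if [ -S "$SOCK" ]; then
  state="present"
else
  state="missing"
fi
echo "socket_vmnet socket is $state at $SOCK"
# If the daemon was installed via Homebrew (an assumption about this host),
# it could be restarted with:
#   sudo brew services restart socket_vmnet
```

A missing socket here would explain why every qemu2-driver test in this report fails within ~10 seconds, before any VM boots.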

TestStartStop/group/old-k8s-version/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-694000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-694000 create -f testdata/busybox.yaml: exit status 1 (29.10225ms)

** stderr ** 
	error: no openapi getter

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-694000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-694000 -n old-k8s-version-694000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-694000 -n old-k8s-version-694000: exit status 7 (35.179459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-694000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-694000 -n old-k8s-version-694000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-694000 -n old-k8s-version-694000: exit status 7 (33.43075ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-694000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.10s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-694000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-694000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-694000 describe deploy/metrics-server -n kube-system: exit status 1 (28.919959ms)

** stderr ** 
	error: context "old-k8s-version-694000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-694000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-694000 -n old-k8s-version-694000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-694000 -n old-k8s-version-694000: exit status 7 (28.445042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-694000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/old-k8s-version/serial/SecondStart (6.96s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-694000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-694000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0: exit status 80 (6.909300458s)

-- stdout --
	* [old-k8s-version-694000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17116
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	* Using the qemu2 driver based on existing profile
	* Starting control plane node old-k8s-version-694000 in cluster old-k8s-version-694000
	* Restarting existing qemu2 VM for "old-k8s-version-694000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-694000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 12:28:38.737233    4477 out.go:296] Setting OutFile to fd 1 ...
	I0906 12:28:38.737364    4477 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:28:38.737367    4477 out.go:309] Setting ErrFile to fd 2...
	I0906 12:28:38.737370    4477 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:28:38.737480    4477 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17116-1006/.minikube/bin
	I0906 12:28:38.738512    4477 out.go:303] Setting JSON to false
	I0906 12:28:38.754152    4477 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1692,"bootTime":1694026826,"procs":403,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:28:38.754230    4477 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 12:28:38.757621    4477 out.go:177] * [old-k8s-version-694000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 12:28:38.764673    4477 out.go:177]   - MINIKUBE_LOCATION=17116
	I0906 12:28:38.764744    4477 notify.go:220] Checking for updates...
	I0906 12:28:38.768604    4477 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	I0906 12:28:38.771726    4477 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:28:38.774701    4477 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:28:38.777685    4477 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	I0906 12:28:38.780691    4477 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:28:38.783940    4477 config.go:182] Loaded profile config "old-k8s-version-694000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0906 12:28:38.786570    4477 out.go:177] * Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	I0906 12:28:38.789669    4477 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 12:28:38.793613    4477 out.go:177] * Using the qemu2 driver based on existing profile
	I0906 12:28:38.800640    4477 start.go:298] selected driver: qemu2
	I0906 12:28:38.800648    4477 start.go:902] validating driver "qemu2" against &{Name:old-k8s-version-694000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-694000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequ
ested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 12:28:38.800705    4477 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:28:38.802757    4477 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:28:38.802787    4477 cni.go:84] Creating CNI manager for ""
	I0906 12:28:38.802793    4477 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0906 12:28:38.802798    4477 start_flags.go:321] config:
	{Name:old-k8s-version-694000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-694000 Namespace:
default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Use
rs:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 12:28:38.806517    4477 iso.go:125] acquiring lock: {Name:mk3f7665f27187d07892ba5c4ce3c49cda04e887 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:28:38.814647    4477 out.go:177] * Starting control plane node old-k8s-version-694000 in cluster old-k8s-version-694000
	I0906 12:28:38.816022    4477 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0906 12:28:38.816043    4477 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0906 12:28:38.816054    4477 cache.go:57] Caching tarball of preloaded images
	I0906 12:28:38.816119    4477 preload.go:174] Found /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 12:28:38.816125    4477 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0906 12:28:38.816198    4477 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/old-k8s-version-694000/config.json ...
	I0906 12:28:38.816497    4477 start.go:365] acquiring machines lock for old-k8s-version-694000: {Name:mkcd55fe6404821d1fd04819f324aec293fbf60b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:28:40.528412    4477 start.go:369] acquired machines lock for "old-k8s-version-694000" in 1.71193475s
	I0906 12:28:40.528502    4477 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:28:40.528534    4477 fix.go:54] fixHost starting: 
	I0906 12:28:40.529359    4477 fix.go:102] recreateIfNeeded on old-k8s-version-694000: state=Stopped err=<nil>
	W0906 12:28:40.529402    4477 fix.go:128] unexpected machine state, will restart: <nil>
	I0906 12:28:40.534957    4477 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-694000" ...
	I0906 12:28:40.548005    4477 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/old-k8s-version-694000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/old-k8s-version-694000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/old-k8s-version-694000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:b3:f5:3b:2c:87 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/old-k8s-version-694000/disk.qcow2
	I0906 12:28:40.559029    4477 main.go:141] libmachine: STDOUT: 
	I0906 12:28:40.559118    4477 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:28:40.559261    4477 fix.go:56] fixHost completed within 30.732083ms
	I0906 12:28:40.559282    4477 start.go:83] releasing machines lock for "old-k8s-version-694000", held for 30.840708ms
	W0906 12:28:40.559318    4477 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:28:40.559653    4477 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:28:40.559683    4477 start.go:687] Will try again in 5 seconds ...
	I0906 12:28:45.560765    4477 start.go:365] acquiring machines lock for old-k8s-version-694000: {Name:mkcd55fe6404821d1fd04819f324aec293fbf60b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:28:45.561221    4477 start.go:369] acquired machines lock for "old-k8s-version-694000" in 321.416µs
	I0906 12:28:45.561371    4477 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:28:45.561391    4477 fix.go:54] fixHost starting: 
	I0906 12:28:45.562142    4477 fix.go:102] recreateIfNeeded on old-k8s-version-694000: state=Stopped err=<nil>
	W0906 12:28:45.562169    4477 fix.go:128] unexpected machine state, will restart: <nil>
	I0906 12:28:45.567776    4477 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-694000" ...
	I0906 12:28:45.574898    4477 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/old-k8s-version-694000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/old-k8s-version-694000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/old-k8s-version-694000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:b3:f5:3b:2c:87 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/old-k8s-version-694000/disk.qcow2
	I0906 12:28:45.584338    4477 main.go:141] libmachine: STDOUT: 
	I0906 12:28:45.584399    4477 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:28:45.584489    4477 fix.go:56] fixHost completed within 23.099833ms
	I0906 12:28:45.584509    4477 start.go:83] releasing machines lock for "old-k8s-version-694000", held for 23.266834ms
	W0906 12:28:45.584701    4477 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-694000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-694000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:28:45.595721    4477 out.go:177] 
	W0906 12:28:45.598882    4477 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:28:45.598908    4477 out.go:239] * 
	* 
	W0906 12:28:45.601087    4477 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:28:45.609752    4477 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-694000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-694000 -n old-k8s-version-694000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-694000 -n old-k8s-version-694000: exit status 7 (48.990542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-694000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (6.96s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-694000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-694000 -n old-k8s-version-694000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-694000 -n old-k8s-version-694000: exit status 7 (34.504541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-694000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-694000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-694000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-694000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (30.143958ms)

** stderr ** 
	error: context "old-k8s-version-694000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-694000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-694000 -n old-k8s-version-694000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-694000 -n old-k8s-version-694000: exit status 7 (32.475917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-694000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p old-k8s-version-694000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p old-k8s-version-694000 "sudo crictl images -o json": exit status 89 (50.494666ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p old-k8s-version-694000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p old-k8s-version-694000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p old-k8s-version-694000"
start_stop_delete_test.go:304: v1.16.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.6.2",
- 	"k8s.gcr.io/etcd:3.3.15-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.16.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.16.0",
- 	"k8s.gcr.io/kube-proxy:v1.16.0",
- 	"k8s.gcr.io/kube-scheduler:v1.16.0",
- 	"k8s.gcr.io/pause:3.1",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-694000 -n old-k8s-version-694000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-694000 -n old-k8s-version-694000: exit status 7 (28.647375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-694000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-694000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-694000 --alsologtostderr -v=1: exit status 89 (39.865583ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p old-k8s-version-694000"

-- /stdout --
** stderr ** 
	I0906 12:28:45.874287    4563 out.go:296] Setting OutFile to fd 1 ...
	I0906 12:28:45.874584    4563 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:28:45.874587    4563 out.go:309] Setting ErrFile to fd 2...
	I0906 12:28:45.874590    4563 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:28:45.874696    4563 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17116-1006/.minikube/bin
	I0906 12:28:45.874870    4563 out.go:303] Setting JSON to false
	I0906 12:28:45.874879    4563 mustload.go:65] Loading cluster: old-k8s-version-694000
	I0906 12:28:45.875044    4563 config.go:182] Loaded profile config "old-k8s-version-694000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0906 12:28:45.879753    4563 out.go:177] * The control plane node must be running for this command
	I0906 12:28:45.882799    4563 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-694000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-694000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-694000 -n old-k8s-version-694000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-694000 -n old-k8s-version-694000: exit status 7 (27.646167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-694000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-694000 -n old-k8s-version-694000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-694000 -n old-k8s-version-694000: exit status 7 (28.364416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-694000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)

TestStartStop/group/embed-certs/serial/FirstStart (11.57s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-293000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-293000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.1: exit status 80 (11.5168865s)

-- stdout --
	* [embed-certs-293000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17116
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node embed-certs-293000 in cluster embed-certs-293000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-293000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 12:28:46.335181    4588 out.go:296] Setting OutFile to fd 1 ...
	I0906 12:28:46.335312    4588 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:28:46.335315    4588 out.go:309] Setting ErrFile to fd 2...
	I0906 12:28:46.335318    4588 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:28:46.335425    4588 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17116-1006/.minikube/bin
	I0906 12:28:46.336459    4588 out.go:303] Setting JSON to false
	I0906 12:28:46.351747    4588 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1700,"bootTime":1694026826,"procs":401,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:28:46.351809    4588 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 12:28:46.356353    4588 out.go:177] * [embed-certs-293000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 12:28:46.366287    4588 out.go:177]   - MINIKUBE_LOCATION=17116
	I0906 12:28:46.362386    4588 notify.go:220] Checking for updates...
	I0906 12:28:46.372322    4588 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	I0906 12:28:46.380281    4588 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:28:46.388336    4588 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:28:46.391238    4588 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	I0906 12:28:46.394273    4588 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:28:46.398104    4588 config.go:182] Loaded profile config "multinode-122000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 12:28:46.398204    4588 config.go:182] Loaded profile config "no-preload-516000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 12:28:46.398263    4588 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 12:28:46.401260    4588 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 12:28:46.408272    4588 start.go:298] selected driver: qemu2
	I0906 12:28:46.408276    4588 start.go:902] validating driver "qemu2" against <nil>
	I0906 12:28:46.408282    4588 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:28:46.410377    4588 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0906 12:28:46.414323    4588 out.go:177] * Automatically selected the socket_vmnet network
	I0906 12:28:46.417381    4588 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:28:46.417403    4588 cni.go:84] Creating CNI manager for ""
	I0906 12:28:46.417411    4588 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 12:28:46.417415    4588 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 12:28:46.417419    4588 start_flags.go:321] config:
	{Name:embed-certs-293000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:embed-certs-293000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 12:28:46.421636    4588 iso.go:125] acquiring lock: {Name:mk3f7665f27187d07892ba5c4ce3c49cda04e887 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:28:46.430269    4588 out.go:177] * Starting control plane node embed-certs-293000 in cluster embed-certs-293000
	I0906 12:28:46.434282    4588 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 12:28:46.434308    4588 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0906 12:28:46.434335    4588 cache.go:57] Caching tarball of preloaded images
	I0906 12:28:46.434404    4588 preload.go:174] Found /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 12:28:46.434414    4588 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0906 12:28:46.434487    4588 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/embed-certs-293000/config.json ...
	I0906 12:28:46.434501    4588 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/embed-certs-293000/config.json: {Name:mk8f75e4ff03d702a5c756e9847220cee9d874e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:28:46.434694    4588 start.go:365] acquiring machines lock for embed-certs-293000: {Name:mkcd55fe6404821d1fd04819f324aec293fbf60b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:28:47.867235    4588 start.go:369] acquired machines lock for "embed-certs-293000" in 1.432523s
	I0906 12:28:47.867410    4588 start.go:93] Provisioning new machine with config: &{Name:embed-certs-293000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:embed-certs-293000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:28:47.867636    4588 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:28:47.877775    4588 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0906 12:28:47.923620    4588 start.go:159] libmachine.API.Create for "embed-certs-293000" (driver="qemu2")
	I0906 12:28:47.923665    4588 client.go:168] LocalClient.Create starting
	I0906 12:28:47.923793    4588 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem
	I0906 12:28:47.923839    4588 main.go:141] libmachine: Decoding PEM data...
	I0906 12:28:47.923866    4588 main.go:141] libmachine: Parsing certificate...
	I0906 12:28:47.923928    4588 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/cert.pem
	I0906 12:28:47.923963    4588 main.go:141] libmachine: Decoding PEM data...
	I0906 12:28:47.923978    4588 main.go:141] libmachine: Parsing certificate...
	I0906 12:28:47.924554    4588 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17116-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 12:28:48.055293    4588 main.go:141] libmachine: Creating SSH key...
	I0906 12:28:48.188549    4588 main.go:141] libmachine: Creating Disk image...
	I0906 12:28:48.188561    4588 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:28:48.188734    4588 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/embed-certs-293000/disk.qcow2.raw /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/embed-certs-293000/disk.qcow2
	I0906 12:28:48.197598    4588 main.go:141] libmachine: STDOUT: 
	I0906 12:28:48.197615    4588 main.go:141] libmachine: STDERR: 
	I0906 12:28:48.197674    4588 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/embed-certs-293000/disk.qcow2 +20000M
	I0906 12:28:48.210504    4588 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:28:48.210524    4588 main.go:141] libmachine: STDERR: 
	I0906 12:28:48.210551    4588 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/embed-certs-293000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/embed-certs-293000/disk.qcow2
	I0906 12:28:48.210563    4588 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:28:48.210609    4588 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/embed-certs-293000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/embed-certs-293000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/embed-certs-293000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:ae:fc:04:ca:36 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/embed-certs-293000/disk.qcow2
	I0906 12:28:48.212215    4588 main.go:141] libmachine: STDOUT: 
	I0906 12:28:48.212229    4588 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:28:48.212248    4588 client.go:171] LocalClient.Create took 288.583834ms
	I0906 12:28:50.214466    4588 start.go:128] duration metric: createHost completed in 2.346858542s
	I0906 12:28:50.214537    4588 start.go:83] releasing machines lock for "embed-certs-293000", held for 2.347306083s
	W0906 12:28:50.214599    4588 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:28:50.233097    4588 out.go:177] * Deleting "embed-certs-293000" in qemu2 ...
	W0906 12:28:50.257541    4588 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:28:50.257572    4588 start.go:687] Will try again in 5 seconds ...
	I0906 12:28:55.259598    4588 start.go:365] acquiring machines lock for embed-certs-293000: {Name:mkcd55fe6404821d1fd04819f324aec293fbf60b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:28:55.274406    4588 start.go:369] acquired machines lock for "embed-certs-293000" in 14.718458ms
	I0906 12:28:55.274461    4588 start.go:93] Provisioning new machine with config: &{Name:embed-certs-293000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:embed-certs-293000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:28:55.274727    4588 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:28:55.283172    4588 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0906 12:28:55.328725    4588 start.go:159] libmachine.API.Create for "embed-certs-293000" (driver="qemu2")
	I0906 12:28:55.328765    4588 client.go:168] LocalClient.Create starting
	I0906 12:28:55.328894    4588 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem
	I0906 12:28:55.328948    4588 main.go:141] libmachine: Decoding PEM data...
	I0906 12:28:55.328970    4588 main.go:141] libmachine: Parsing certificate...
	I0906 12:28:55.329035    4588 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/cert.pem
	I0906 12:28:55.329071    4588 main.go:141] libmachine: Decoding PEM data...
	I0906 12:28:55.329091    4588 main.go:141] libmachine: Parsing certificate...
	I0906 12:28:55.329567    4588 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17116-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 12:28:55.489156    4588 main.go:141] libmachine: Creating SSH key...
	I0906 12:28:55.763834    4588 main.go:141] libmachine: Creating Disk image...
	I0906 12:28:55.763847    4588 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:28:55.764017    4588 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/embed-certs-293000/disk.qcow2.raw /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/embed-certs-293000/disk.qcow2
	I0906 12:28:55.773442    4588 main.go:141] libmachine: STDOUT: 
	I0906 12:28:55.773461    4588 main.go:141] libmachine: STDERR: 
	I0906 12:28:55.773543    4588 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/embed-certs-293000/disk.qcow2 +20000M
	I0906 12:28:55.781573    4588 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:28:55.781591    4588 main.go:141] libmachine: STDERR: 
	I0906 12:28:55.781641    4588 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/embed-certs-293000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/embed-certs-293000/disk.qcow2
	I0906 12:28:55.781657    4588 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:28:55.781705    4588 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/embed-certs-293000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/embed-certs-293000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/embed-certs-293000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:3e:be:86:83:32 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/embed-certs-293000/disk.qcow2
	I0906 12:28:55.783559    4588 main.go:141] libmachine: STDOUT: 
	I0906 12:28:55.783574    4588 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:28:55.783586    4588 client.go:171] LocalClient.Create took 454.828667ms
	I0906 12:28:57.785727    4588 start.go:128] duration metric: createHost completed in 2.511037s
	I0906 12:28:57.785806    4588 start.go:83] releasing machines lock for "embed-certs-293000", held for 2.511435875s
	W0906 12:28:57.786193    4588 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-293000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:28:57.799902    4588 out.go:177] 
	W0906 12:28:57.803034    4588 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:28:57.803064    4588 out.go:239] * 
	W0906 12:28:57.805369    4588 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:28:57.815878    4588 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-293000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-293000 -n embed-certs-293000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-293000 -n embed-certs-293000: exit status 7 (50.547083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-293000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (11.57s)
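Every start failure in this run reduces to the same root cause: `Failed to connect to "/var/run/socket_vmnet": Connection refused`, meaning the socket_vmnet daemon was not listening when `socket_vmnet_client` tried to attach the VM's netdev. A pre-flight check along these lines (a hypothetical helper script, not part of the test suite; the socket path is the `SocketVMnetPath` value from the profile config in the logs) would let the job fail fast with a clearer message:

```shell
#!/bin/sh
# Pre-flight check for minikube's qemu2 driver with socket_vmnet networking.
# SOCK mirrors the SocketVMnetPath seen in this run's profile config.
SOCK=/var/run/socket_vmnet

if [ -S "$SOCK" ]; then
    # Socket file exists (a stale file is still possible if the daemon died).
    echo "OK: socket_vmnet socket present at $SOCK"
    STATUS=0
else
    echo "FAIL: $SOCK is absent; socket_vmnet daemon is not running"
    STATUS=1
fi
```

On this host the check would have reported FAIL up front, instead of each test spending ~10 s creating SSH keys and disk images only to fail at VM start.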

TestStartStop/group/no-preload/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-516000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-516000 create -f testdata/busybox.yaml: exit status 1 (29.312917ms)

** stderr ** 
	error: no openapi getter

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-516000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-516000 -n no-preload-516000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-516000 -n no-preload-516000: exit status 7 (33.288459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-516000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-516000 -n no-preload-516000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-516000 -n no-preload-516000: exit status 7 (33.908417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-516000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.10s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-516000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-516000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-516000 describe deploy/metrics-server -n kube-system: exit status 1 (27.476542ms)

** stderr ** 
	error: context "no-preload-516000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-516000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-516000 -n no-preload-516000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-516000 -n no-preload-516000: exit status 7 (29.250875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-516000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)
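The `error: context "no-preload-516000" does not exist` and `error: no openapi getter` messages above are downstream symptoms, not independent failures: the first start exited before provisioning completed, so every subsequent `kubectl --context no-preload-516000 ...` call fails immediately. A quick way to distinguish "cluster never came up" from "cluster came up and then broke" (a sketch; the context name is taken from the test above):

```shell
# Check whether the failed start left a usable kubeconfig context behind.
CTX=no-preload-516000

if kubectl config get-contexts -o name 2>/dev/null | grep -qx "$CTX"; then
    echo "context $CTX exists; the cluster was provisioned at least once"
else
    echo "context $CTX missing; minikube start failed before provisioning"
fi
```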

TestStartStop/group/no-preload/serial/SecondStart (7.04s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-516000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-516000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.1: exit status 80 (6.985454625s)

-- stdout --
	* [no-preload-516000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17116
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node no-preload-516000 in cluster no-preload-516000
	* Restarting existing qemu2 VM for "no-preload-516000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-516000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 12:28:48.354500    4616 out.go:296] Setting OutFile to fd 1 ...
	I0906 12:28:48.354612    4616 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:28:48.354614    4616 out.go:309] Setting ErrFile to fd 2...
	I0906 12:28:48.354617    4616 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:28:48.354726    4616 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17116-1006/.minikube/bin
	I0906 12:28:48.355640    4616 out.go:303] Setting JSON to false
	I0906 12:28:48.370617    4616 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1702,"bootTime":1694026826,"procs":401,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:28:48.370688    4616 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 12:28:48.374444    4616 out.go:177] * [no-preload-516000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 12:28:48.381353    4616 out.go:177]   - MINIKUBE_LOCATION=17116
	I0906 12:28:48.385413    4616 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	I0906 12:28:48.381396    4616 notify.go:220] Checking for updates...
	I0906 12:28:48.392386    4616 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:28:48.395441    4616 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:28:48.398389    4616 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	I0906 12:28:48.401413    4616 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:28:48.404760    4616 config.go:182] Loaded profile config "no-preload-516000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 12:28:48.404997    4616 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 12:28:48.409494    4616 out.go:177] * Using the qemu2 driver based on existing profile
	I0906 12:28:48.416378    4616 start.go:298] selected driver: qemu2
	I0906 12:28:48.416384    4616 start.go:902] validating driver "qemu2" against &{Name:no-preload-516000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:no-preload-516000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 12:28:48.416438    4616 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:28:48.418503    4616 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:28:48.418529    4616 cni.go:84] Creating CNI manager for ""
	I0906 12:28:48.418535    4616 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 12:28:48.418540    4616 start_flags.go:321] config:
	{Name:no-preload-516000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:no-preload-516000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 12:28:48.422495    4616 iso.go:125] acquiring lock: {Name:mk3f7665f27187d07892ba5c4ce3c49cda04e887 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:28:48.429416    4616 out.go:177] * Starting control plane node no-preload-516000 in cluster no-preload-516000
	I0906 12:28:48.433399    4616 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 12:28:48.433466    4616 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/no-preload-516000/config.json ...
	I0906 12:28:48.433485    4616 cache.go:107] acquiring lock: {Name:mkb5bfb95e12e7b110ffa3b5337b65056a9d05bc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:28:48.433507    4616 cache.go:107] acquiring lock: {Name:mk614819d8e677c0d43908025d8bf7b81dec2d04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:28:48.433505    4616 cache.go:107] acquiring lock: {Name:mk72cf879c699b61770cff2e43d3225f0c03109e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:28:48.433531    4616 cache.go:115] /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0906 12:28:48.433536    4616 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 53.125µs
	I0906 12:28:48.433541    4616 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0906 12:28:48.433546    4616 cache.go:107] acquiring lock: {Name:mkfa0db9eae71f56f1f9cb374660ce1cd258de6a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:28:48.433568    4616 cache.go:115] /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.1 exists
	I0906 12:28:48.433576    4616 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.28.1" -> "/Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.1" took 79.375µs
	I0906 12:28:48.433580    4616 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.28.1 -> /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.1 succeeded
	I0906 12:28:48.433578    4616 cache.go:115] /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0906 12:28:48.433583    4616 cache.go:107] acquiring lock: {Name:mkac12efbe4b49755dd310cd4a2b70ca37e2a116 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:28:48.433595    4616 cache.go:107] acquiring lock: {Name:mk7bf497782e16346607c4d7b17c59ca2f5d6174 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:28:48.433621    4616 cache.go:115] /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.1 exists
	I0906 12:28:48.433631    4616 cache.go:115] /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.1 exists
	I0906 12:28:48.433582    4616 cache.go:107] acquiring lock: {Name:mk7880c43623d0f1c5b2f2c5f167495557261c5a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:28:48.433634    4616 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.28.1" -> "/Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.1" took 40.25µs
	I0906 12:28:48.433839    4616 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.28.1 -> /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.1 succeeded
	I0906 12:28:48.433625    4616 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.28.1" -> "/Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.1" took 43.334µs
	I0906 12:28:48.433861    4616 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.28.1 -> /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.1 succeeded
	I0906 12:28:48.433864    4616 cache.go:107] acquiring lock: {Name:mk61310604d837a2e71a8e6d121a25ec4a38d20f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:28:48.433589    4616 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 40.042µs
	I0906 12:28:48.433960    4616 start.go:365] acquiring machines lock for no-preload-516000: {Name:mkcd55fe6404821d1fd04819f324aec293fbf60b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:28:48.433575    4616 cache.go:115] /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.1 exists
	I0906 12:28:48.433995    4616 cache.go:115] /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I0906 12:28:48.434003    4616 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1" took 421.125µs
	I0906 12:28:48.434004    4616 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.28.1" -> "/Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.1" took 510.834µs
	I0906 12:28:48.434013    4616 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I0906 12:28:48.434015    4616 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.28.1 -> /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.1 succeeded
	I0906 12:28:48.433919    4616 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0906 12:28:48.434080    4616 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I0906 12:28:48.438248    4616 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I0906 12:28:49.019038    4616 cache.go:162] opening:  /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0
	I0906 12:28:50.214679    4616 start.go:369] acquired machines lock for "no-preload-516000" in 1.780745458s
	I0906 12:28:50.214875    4616 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:28:50.214895    4616 fix.go:54] fixHost starting: 
	I0906 12:28:50.215569    4616 fix.go:102] recreateIfNeeded on no-preload-516000: state=Stopped err=<nil>
	W0906 12:28:50.215602    4616 fix.go:128] unexpected machine state, will restart: <nil>
	I0906 12:28:50.225094    4616 out.go:177] * Restarting existing qemu2 VM for "no-preload-516000" ...
	I0906 12:28:50.237138    4616 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/no-preload-516000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/no-preload-516000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/no-preload-516000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:02:d3:46:7b:e1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/no-preload-516000/disk.qcow2
	I0906 12:28:50.248149    4616 main.go:141] libmachine: STDOUT: 
	I0906 12:28:50.248200    4616 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:28:50.248484    4616 fix.go:56] fixHost completed within 33.58275ms
	I0906 12:28:50.248504    4616 start.go:83] releasing machines lock for "no-preload-516000", held for 33.788083ms
	W0906 12:28:50.248541    4616 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:28:50.248690    4616 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:28:50.248709    4616 start.go:687] Will try again in 5 seconds ...
	I0906 12:28:55.249102    4616 start.go:365] acquiring machines lock for no-preload-516000: {Name:mkcd55fe6404821d1fd04819f324aec293fbf60b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:28:55.249650    4616 start.go:369] acquired machines lock for "no-preload-516000" in 429.292µs
	I0906 12:28:55.249831    4616 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:28:55.249856    4616 fix.go:54] fixHost starting: 
	I0906 12:28:55.250721    4616 fix.go:102] recreateIfNeeded on no-preload-516000: state=Stopped err=<nil>
	W0906 12:28:55.250748    4616 fix.go:128] unexpected machine state, will restart: <nil>
	I0906 12:28:55.256368    4616 out.go:177] * Restarting existing qemu2 VM for "no-preload-516000" ...
	I0906 12:28:55.264438    4616 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/no-preload-516000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/no-preload-516000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/no-preload-516000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:02:d3:46:7b:e1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/no-preload-516000/disk.qcow2
	I0906 12:28:55.274148    4616 main.go:141] libmachine: STDOUT: 
	I0906 12:28:55.274198    4616 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:28:55.274287    4616 fix.go:56] fixHost completed within 24.431417ms
	I0906 12:28:55.274326    4616 start.go:83] releasing machines lock for "no-preload-516000", held for 24.623ms
	W0906 12:28:55.274497    4616 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-516000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:28:55.286209    4616 out.go:177] 
	W0906 12:28:55.290318    4616 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:28:55.290349    4616 out.go:239] * 
	* 
	W0906 12:28:55.292982    4616 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:28:55.302210    4616 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-516000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-516000 -n no-preload-516000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-516000 -n no-preload-516000: exit status 7 (48.513208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-516000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (7.04s)
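Every failure in this group traces back to the same driver error, `Failed to connect to "/var/run/socket_vmnet": Connection refused` — the socket_vmnet daemon was not listening when QEMU was launched. A minimal pre-flight check is sketched below; the socket and client paths are taken from the log above, while the restart command in the comment is an assumption about how socket_vmnet was installed on this agent:

```shell
# Diagnose the recurring "Connection refused" error: verify that the
# socket_vmnet daemon has created its UNIX socket before minikube runs.
SOCK=/var/run/socket_vmnet

if [ -S "$SOCK" ]; then
  echo "ok: $SOCK exists; socket_vmnet appears to be running"
else
  echo "missing: $SOCK; socket_vmnet is likely not running"
  # Assumed install location (matches SocketVMnetClientPath in the log);
  # starting the daemon by hand would look roughly like:
  #   sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 "$SOCK"
fi
```

If the socket is absent for the whole run, every qemu2-driver test fails the same way, which matches the pattern of failures in this report.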

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-516000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-516000 -n no-preload-516000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-516000 -n no-preload-516000: exit status 7 (34.030375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-516000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-516000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-516000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-516000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.116083ms)

** stderr ** 
	error: context "no-preload-516000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-516000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-516000 -n no-preload-516000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-516000 -n no-preload-516000: exit status 7 (31.387583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-516000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p no-preload-516000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p no-preload-516000 "sudo crictl images -o json": exit status 89 (50.895208ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p no-preload-516000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p no-preload-516000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p no-preload-516000"
start_stop_delete_test.go:304: v1.28.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.1",
- 	"registry.k8s.io/kube-controller-manager:v1.28.1",
- 	"registry.k8s.io/kube-proxy:v1.28.1",
- 	"registry.k8s.io/kube-scheduler:v1.28.1",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-516000 -n no-preload-516000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-516000 -n no-preload-516000: exit status 7 (29.743291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-516000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-516000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-516000 --alsologtostderr -v=1: exit status 89 (41.660333ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p no-preload-516000"

-- /stdout --
** stderr ** 
	I0906 12:28:55.567285    4650 out.go:296] Setting OutFile to fd 1 ...
	I0906 12:28:55.567408    4650 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:28:55.567411    4650 out.go:309] Setting ErrFile to fd 2...
	I0906 12:28:55.567413    4650 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:28:55.567532    4650 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17116-1006/.minikube/bin
	I0906 12:28:55.567770    4650 out.go:303] Setting JSON to false
	I0906 12:28:55.567779    4650 mustload.go:65] Loading cluster: no-preload-516000
	I0906 12:28:55.567941    4650 config.go:182] Loaded profile config "no-preload-516000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 12:28:55.572140    4650 out.go:177] * The control plane node must be running for this command
	I0906 12:28:55.576318    4650 out.go:177]   To start a cluster, run: "minikube start -p no-preload-516000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-516000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-516000 -n no-preload-516000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-516000 -n no-preload-516000: exit status 7 (28.98ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-516000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-516000 -n no-preload-516000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-516000 -n no-preload-516000: exit status 7 (29.033625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-516000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (11.38s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-649000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-649000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.1: exit status 80 (11.322763042s)

-- stdout --
	* [default-k8s-diff-port-649000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17116
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node default-k8s-diff-port-649000 in cluster default-k8s-diff-port-649000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-649000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 12:28:56.274040    4688 out.go:296] Setting OutFile to fd 1 ...
	I0906 12:28:56.274186    4688 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:28:56.274189    4688 out.go:309] Setting ErrFile to fd 2...
	I0906 12:28:56.274191    4688 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:28:56.274292    4688 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17116-1006/.minikube/bin
	I0906 12:28:56.275318    4688 out.go:303] Setting JSON to false
	I0906 12:28:56.290409    4688 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1710,"bootTime":1694026826,"procs":399,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:28:56.290469    4688 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 12:28:56.294817    4688 out.go:177] * [default-k8s-diff-port-649000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 12:28:56.301823    4688 out.go:177]   - MINIKUBE_LOCATION=17116
	I0906 12:28:56.301904    4688 notify.go:220] Checking for updates...
	I0906 12:28:56.309707    4688 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	I0906 12:28:56.313709    4688 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:28:56.316665    4688 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:28:56.320669    4688 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	I0906 12:28:56.323794    4688 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:28:56.327042    4688 config.go:182] Loaded profile config "embed-certs-293000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 12:28:56.327104    4688 config.go:182] Loaded profile config "multinode-122000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 12:28:56.327139    4688 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 12:28:56.331711    4688 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 12:28:56.338750    4688 start.go:298] selected driver: qemu2
	I0906 12:28:56.338757    4688 start.go:902] validating driver "qemu2" against <nil>
	I0906 12:28:56.338764    4688 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:28:56.340811    4688 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0906 12:28:56.345566    4688 out.go:177] * Automatically selected the socket_vmnet network
	I0906 12:28:56.349840    4688 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:28:56.349874    4688 cni.go:84] Creating CNI manager for ""
	I0906 12:28:56.349884    4688 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 12:28:56.349890    4688 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 12:28:56.349896    4688 start_flags.go:321] config:
	{Name:default-k8s-diff-port-649000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-649000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 12:28:56.354145    4688 iso.go:125] acquiring lock: {Name:mk3f7665f27187d07892ba5c4ce3c49cda04e887 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:28:56.362730    4688 out.go:177] * Starting control plane node default-k8s-diff-port-649000 in cluster default-k8s-diff-port-649000
	I0906 12:28:56.366778    4688 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 12:28:56.366798    4688 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0906 12:28:56.366817    4688 cache.go:57] Caching tarball of preloaded images
	I0906 12:28:56.366897    4688 preload.go:174] Found /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 12:28:56.366903    4688 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0906 12:28:56.366967    4688 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/default-k8s-diff-port-649000/config.json ...
	I0906 12:28:56.366982    4688 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/default-k8s-diff-port-649000/config.json: {Name:mkd61c64057bbfc0e7e92d7192b1b06296414332 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:28:56.367189    4688 start.go:365] acquiring machines lock for default-k8s-diff-port-649000: {Name:mkcd55fe6404821d1fd04819f324aec293fbf60b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:28:57.785956    4688 start.go:369] acquired machines lock for "default-k8s-diff-port-649000" in 1.418780917s
	I0906 12:28:57.786171    4688 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-649000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-649000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:28:57.786402    4688 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:28:57.795952    4688 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0906 12:28:57.841478    4688 start.go:159] libmachine.API.Create for "default-k8s-diff-port-649000" (driver="qemu2")
	I0906 12:28:57.841521    4688 client.go:168] LocalClient.Create starting
	I0906 12:28:57.841640    4688 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem
	I0906 12:28:57.841688    4688 main.go:141] libmachine: Decoding PEM data...
	I0906 12:28:57.841709    4688 main.go:141] libmachine: Parsing certificate...
	I0906 12:28:57.841786    4688 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/cert.pem
	I0906 12:28:57.841822    4688 main.go:141] libmachine: Decoding PEM data...
	I0906 12:28:57.841835    4688 main.go:141] libmachine: Parsing certificate...
	I0906 12:28:57.842407    4688 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17116-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 12:28:57.975348    4688 main.go:141] libmachine: Creating SSH key...
	I0906 12:28:58.111816    4688 main.go:141] libmachine: Creating Disk image...
	I0906 12:28:58.111826    4688 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:28:58.111986    4688 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/default-k8s-diff-port-649000/disk.qcow2.raw /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/default-k8s-diff-port-649000/disk.qcow2
	I0906 12:28:58.129704    4688 main.go:141] libmachine: STDOUT: 
	I0906 12:28:58.129726    4688 main.go:141] libmachine: STDERR: 
	I0906 12:28:58.129782    4688 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/default-k8s-diff-port-649000/disk.qcow2 +20000M
	I0906 12:28:58.139589    4688 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:28:58.139603    4688 main.go:141] libmachine: STDERR: 
	I0906 12:28:58.139624    4688 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/default-k8s-diff-port-649000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/default-k8s-diff-port-649000/disk.qcow2
	I0906 12:28:58.139640    4688 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:28:58.139683    4688 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/default-k8s-diff-port-649000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/default-k8s-diff-port-649000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/default-k8s-diff-port-649000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:00:12:eb:29:b8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/default-k8s-diff-port-649000/disk.qcow2
	I0906 12:28:58.141142    4688 main.go:141] libmachine: STDOUT: 
	I0906 12:28:58.141157    4688 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:28:58.141175    4688 client.go:171] LocalClient.Create took 299.655125ms
	I0906 12:29:00.143296    4688 start.go:128] duration metric: createHost completed in 2.356914334s
	I0906 12:29:00.143364    4688 start.go:83] releasing machines lock for "default-k8s-diff-port-649000", held for 2.357433583s
	W0906 12:29:00.143450    4688 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:29:00.157823    4688 out.go:177] * Deleting "default-k8s-diff-port-649000" in qemu2 ...
	W0906 12:29:00.181387    4688 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:29:00.181417    4688 start.go:687] Will try again in 5 seconds ...
	I0906 12:29:05.182412    4688 start.go:365] acquiring machines lock for default-k8s-diff-port-649000: {Name:mkcd55fe6404821d1fd04819f324aec293fbf60b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:29:05.196672    4688 start.go:369] acquired machines lock for "default-k8s-diff-port-649000" in 14.171166ms
	I0906 12:29:05.196726    4688 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-649000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-649000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:29:05.196977    4688 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:29:05.204542    4688 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0906 12:29:05.249175    4688 start.go:159] libmachine.API.Create for "default-k8s-diff-port-649000" (driver="qemu2")
	I0906 12:29:05.249224    4688 client.go:168] LocalClient.Create starting
	I0906 12:29:05.249342    4688 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem
	I0906 12:29:05.249402    4688 main.go:141] libmachine: Decoding PEM data...
	I0906 12:29:05.249423    4688 main.go:141] libmachine: Parsing certificate...
	I0906 12:29:05.249492    4688 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/cert.pem
	I0906 12:29:05.249529    4688 main.go:141] libmachine: Decoding PEM data...
	I0906 12:29:05.249541    4688 main.go:141] libmachine: Parsing certificate...
	I0906 12:29:05.250066    4688 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17116-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 12:29:05.378195    4688 main.go:141] libmachine: Creating SSH key...
	I0906 12:29:05.507280    4688 main.go:141] libmachine: Creating Disk image...
	I0906 12:29:05.507291    4688 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:29:05.507462    4688 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/default-k8s-diff-port-649000/disk.qcow2.raw /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/default-k8s-diff-port-649000/disk.qcow2
	I0906 12:29:05.516317    4688 main.go:141] libmachine: STDOUT: 
	I0906 12:29:05.516344    4688 main.go:141] libmachine: STDERR: 
	I0906 12:29:05.516408    4688 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/default-k8s-diff-port-649000/disk.qcow2 +20000M
	I0906 12:29:05.524718    4688 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:29:05.524737    4688 main.go:141] libmachine: STDERR: 
	I0906 12:29:05.524767    4688 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/default-k8s-diff-port-649000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/default-k8s-diff-port-649000/disk.qcow2
	I0906 12:29:05.524780    4688 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:29:05.524826    4688 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/default-k8s-diff-port-649000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/default-k8s-diff-port-649000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/default-k8s-diff-port-649000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:d4:18:26:5b:cc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/default-k8s-diff-port-649000/disk.qcow2
	I0906 12:29:05.526595    4688 main.go:141] libmachine: STDOUT: 
	I0906 12:29:05.526612    4688 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:29:05.526624    4688 client.go:171] LocalClient.Create took 277.4025ms
	I0906 12:29:07.528931    4688 start.go:128] duration metric: createHost completed in 2.331961s
	I0906 12:29:07.529009    4688 start.go:83] releasing machines lock for "default-k8s-diff-port-649000", held for 2.33237325s
	W0906 12:29:07.529392    4688 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-649000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-649000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:29:07.545810    4688 out.go:177] 
	W0906 12:29:07.548994    4688 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:29:07.549033    4688 out.go:239] * 
	* 
	W0906 12:29:07.551885    4688 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:29:07.560588    4688 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-649000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-649000 -n default-k8s-diff-port-649000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-649000 -n default-k8s-diff-port-649000: exit status 7 (48.806958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-649000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (11.38s)
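Every failure in this group reduces to the same line in the driver output: `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. nothing was accepting connections on the socket that `socket_vmnet_client` (the wrapper in the qemu command lines above) requires. A minimal pre-flight check for that socket might look like the following sketch; the path comes from the `SocketVMnetPath` field in the machine config above, while the helper name `check_vmnet_socket` is ours, not part of minikube or socket_vmnet:

```shell
#!/bin/sh
# check_vmnet_socket PATH -> succeeds only if PATH exists as a unix socket.
# Note this only tests for the socket file itself; a stale socket can still
# refuse connections, which is what the log's "Connection refused" indicates.
check_vmnet_socket() {
    sock="$1"
    if [ -S "$sock" ]; then
        echo "socket present: $sock"
    else
        echo "missing socket: $sock"
        return 1
    fi
}

# Path taken from SocketVMnetPath in the machine config logged above.
check_vmnet_socket /var/run/socket_vmnet \
    || echo "socket_vmnet daemon does not appear to be running"
```

If the socket file exists but connections are still refused, the daemon behind it has likely exited and needs to be restarted (how it is supervised depends on the host setup).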

TestStartStop/group/embed-certs/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-293000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-293000 create -f testdata/busybox.yaml: exit status 1 (31.135667ms)

** stderr ** 
	error: no openapi getter

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-293000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-293000 -n embed-certs-293000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-293000 -n embed-certs-293000: exit status 7 (33.056209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-293000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-293000 -n embed-certs-293000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-293000 -n embed-certs-293000: exit status 7 (33.231791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-293000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-293000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-293000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-293000 describe deploy/metrics-server -n kube-system: exit status 1 (26.706791ms)

** stderr ** 
	error: context "embed-certs-293000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-293000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-293000 -n embed-certs-293000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-293000 -n embed-certs-293000: exit status 7 (29.5275ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-293000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/embed-certs/serial/SecondStart (7.05s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-293000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.1
E0906 12:29:00.032247    1421 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/functional-779000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-293000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.1: exit status 80 (6.995712958s)

-- stdout --
	* [embed-certs-293000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17116
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node embed-certs-293000 in cluster embed-certs-293000
	* Restarting existing qemu2 VM for "embed-certs-293000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-293000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 12:28:58.263243    4718 out.go:296] Setting OutFile to fd 1 ...
	I0906 12:28:58.263372    4718 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:28:58.263374    4718 out.go:309] Setting ErrFile to fd 2...
	I0906 12:28:58.263376    4718 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:28:58.263480    4718 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17116-1006/.minikube/bin
	I0906 12:28:58.264485    4718 out.go:303] Setting JSON to false
	I0906 12:28:58.279323    4718 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1712,"bootTime":1694026826,"procs":401,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:28:58.279402    4718 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 12:28:58.282927    4718 out.go:177] * [embed-certs-293000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 12:28:58.290993    4718 out.go:177]   - MINIKUBE_LOCATION=17116
	I0906 12:28:58.294940    4718 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	I0906 12:28:58.291055    4718 notify.go:220] Checking for updates...
	I0906 12:28:58.301957    4718 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:28:58.304994    4718 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:28:58.307973    4718 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	I0906 12:28:58.311037    4718 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:28:58.314203    4718 config.go:182] Loaded profile config "embed-certs-293000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 12:28:58.314443    4718 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 12:28:58.318995    4718 out.go:177] * Using the qemu2 driver based on existing profile
	I0906 12:28:58.325938    4718 start.go:298] selected driver: qemu2
	I0906 12:28:58.325945    4718 start.go:902] validating driver "qemu2" against &{Name:embed-certs-293000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:embed-certs-293000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 12:28:58.326026    4718 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:28:58.328328    4718 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:28:58.328364    4718 cni.go:84] Creating CNI manager for ""
	I0906 12:28:58.328371    4718 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 12:28:58.328376    4718 start_flags.go:321] config:
	{Name:embed-certs-293000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:embed-certs-293000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 12:28:58.333063    4718 iso.go:125] acquiring lock: {Name:mk3f7665f27187d07892ba5c4ce3c49cda04e887 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:28:58.339833    4718 out.go:177] * Starting control plane node embed-certs-293000 in cluster embed-certs-293000
	I0906 12:28:58.343888    4718 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 12:28:58.343908    4718 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0906 12:28:58.343925    4718 cache.go:57] Caching tarball of preloaded images
	I0906 12:28:58.343985    4718 preload.go:174] Found /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 12:28:58.343991    4718 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0906 12:28:58.344070    4718 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/embed-certs-293000/config.json ...
	I0906 12:28:58.344434    4718 start.go:365] acquiring machines lock for embed-certs-293000: {Name:mkcd55fe6404821d1fd04819f324aec293fbf60b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:29:00.143494    4718 start.go:369] acquired machines lock for "embed-certs-293000" in 1.799068625s
	I0906 12:29:00.143657    4718 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:29:00.143678    4718 fix.go:54] fixHost starting: 
	I0906 12:29:00.144370    4718 fix.go:102] recreateIfNeeded on embed-certs-293000: state=Stopped err=<nil>
	W0906 12:29:00.144417    4718 fix.go:128] unexpected machine state, will restart: <nil>
	I0906 12:29:00.149938    4718 out.go:177] * Restarting existing qemu2 VM for "embed-certs-293000" ...
	I0906 12:29:00.162043    4718 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/embed-certs-293000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/embed-certs-293000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/embed-certs-293000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:3e:be:86:83:32 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/embed-certs-293000/disk.qcow2
	I0906 12:29:00.171534    4718 main.go:141] libmachine: STDOUT: 
	I0906 12:29:00.171593    4718 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:29:00.171704    4718 fix.go:56] fixHost completed within 28.021375ms
	I0906 12:29:00.171720    4718 start.go:83] releasing machines lock for "embed-certs-293000", held for 28.198958ms
	W0906 12:29:00.171748    4718 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:29:00.171910    4718 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:29:00.171926    4718 start.go:687] Will try again in 5 seconds ...
	I0906 12:29:05.173661    4718 start.go:365] acquiring machines lock for embed-certs-293000: {Name:mkcd55fe6404821d1fd04819f324aec293fbf60b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:29:05.174157    4718 start.go:369] acquired machines lock for "embed-certs-293000" in 399.333µs
	I0906 12:29:05.174314    4718 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:29:05.174337    4718 fix.go:54] fixHost starting: 
	I0906 12:29:05.175134    4718 fix.go:102] recreateIfNeeded on embed-certs-293000: state=Stopped err=<nil>
	W0906 12:29:05.175160    4718 fix.go:128] unexpected machine state, will restart: <nil>
	I0906 12:29:05.183594    4718 out.go:177] * Restarting existing qemu2 VM for "embed-certs-293000" ...
	I0906 12:29:05.186785    4718 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/embed-certs-293000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/embed-certs-293000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/embed-certs-293000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:3e:be:86:83:32 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/embed-certs-293000/disk.qcow2
	I0906 12:29:05.196379    4718 main.go:141] libmachine: STDOUT: 
	I0906 12:29:05.196442    4718 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:29:05.196542    4718 fix.go:56] fixHost completed within 22.205708ms
	I0906 12:29:05.196568    4718 start.go:83] releasing machines lock for "embed-certs-293000", held for 22.383166ms
	W0906 12:29:05.196827    4718 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-293000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-293000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:29:05.207614    4718 out.go:177] 
	W0906 12:29:05.211667    4718 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:29:05.211714    4718 out.go:239] * 
	* 
	W0906 12:29:05.214698    4718 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:29:05.222490    4718 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-293000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-293000 -n embed-certs-293000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-293000 -n embed-certs-293000: exit status 7 (50.053458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-293000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (7.05s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-293000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-293000 -n embed-certs-293000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-293000 -n embed-certs-293000: exit status 7 (34.203625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-293000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-293000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-293000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-293000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.638959ms)

** stderr ** 
	error: context "embed-certs-293000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-293000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-293000 -n embed-certs-293000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-293000 -n embed-certs-293000: exit status 7 (32.98125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-293000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p embed-certs-293000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p embed-certs-293000 "sudo crictl images -o json": exit status 89 (39.408125ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p embed-certs-293000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p embed-certs-293000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p embed-certs-293000"
start_stop_delete_test.go:304: v1.28.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.1",
- 	"registry.k8s.io/kube-controller-manager:v1.28.1",
- 	"registry.k8s.io/kube-proxy:v1.28.1",
- 	"registry.k8s.io/kube-scheduler:v1.28.1",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-293000 -n embed-certs-293000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-293000 -n embed-certs-293000: exit status 7 (28.776167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-293000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-293000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-293000 --alsologtostderr -v=1: exit status 89 (40.213583ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p embed-certs-293000"

-- /stdout --
** stderr ** 
	I0906 12:29:05.477417    4738 out.go:296] Setting OutFile to fd 1 ...
	I0906 12:29:05.477563    4738 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:29:05.477566    4738 out.go:309] Setting ErrFile to fd 2...
	I0906 12:29:05.477569    4738 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:29:05.477679    4738 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17116-1006/.minikube/bin
	I0906 12:29:05.477893    4738 out.go:303] Setting JSON to false
	I0906 12:29:05.477901    4738 mustload.go:65] Loading cluster: embed-certs-293000
	I0906 12:29:05.478071    4738 config.go:182] Loaded profile config "embed-certs-293000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 12:29:05.482548    4738 out.go:177] * The control plane node must be running for this command
	I0906 12:29:05.485536    4738 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-293000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-293000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-293000 -n embed-certs-293000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-293000 -n embed-certs-293000: exit status 7 (29.126291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-293000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-293000 -n embed-certs-293000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-293000 -n embed-certs-293000: exit status 7 (28.459708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-293000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)

TestStartStop/group/newest-cni/serial/FirstStart (11.55s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-401000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-401000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.1: exit status 80 (11.478285875s)

-- stdout --
	* [newest-cni-401000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17116
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node newest-cni-401000 in cluster newest-cni-401000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-401000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 12:29:05.944048    4764 out.go:296] Setting OutFile to fd 1 ...
	I0906 12:29:05.944169    4764 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:29:05.944172    4764 out.go:309] Setting ErrFile to fd 2...
	I0906 12:29:05.944175    4764 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:29:05.944286    4764 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17116-1006/.minikube/bin
	I0906 12:29:05.945328    4764 out.go:303] Setting JSON to false
	I0906 12:29:05.960516    4764 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1719,"bootTime":1694026826,"procs":401,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:29:05.960598    4764 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 12:29:05.964674    4764 out.go:177] * [newest-cni-401000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 12:29:05.973640    4764 out.go:177]   - MINIKUBE_LOCATION=17116
	I0906 12:29:05.973692    4764 notify.go:220] Checking for updates...
	I0906 12:29:05.980696    4764 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	I0906 12:29:05.983681    4764 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:29:05.986679    4764 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:29:05.989671    4764 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	I0906 12:29:05.992659    4764 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:29:05.996014    4764 config.go:182] Loaded profile config "default-k8s-diff-port-649000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 12:29:05.996078    4764 config.go:182] Loaded profile config "multinode-122000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 12:29:05.996119    4764 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 12:29:06.000673    4764 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 12:29:06.007611    4764 start.go:298] selected driver: qemu2
	I0906 12:29:06.007621    4764 start.go:902] validating driver "qemu2" against <nil>
	I0906 12:29:06.007628    4764 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:29:06.009554    4764 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	W0906 12:29:06.009578    4764 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0906 12:29:06.016557    4764 out.go:177] * Automatically selected the socket_vmnet network
	I0906 12:29:06.019742    4764 start_flags.go:941] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0906 12:29:06.019772    4764 cni.go:84] Creating CNI manager for ""
	I0906 12:29:06.019780    4764 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 12:29:06.019784    4764 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 12:29:06.019790    4764 start_flags.go:321] config:
	{Name:newest-cni-401000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:newest-cni-401000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:d
ocker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/s
ocket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 12:29:06.023881    4764 iso.go:125] acquiring lock: {Name:mk3f7665f27187d07892ba5c4ce3c49cda04e887 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:29:06.030602    4764 out.go:177] * Starting control plane node newest-cni-401000 in cluster newest-cni-401000
	I0906 12:29:06.034661    4764 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 12:29:06.034680    4764 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0906 12:29:06.034695    4764 cache.go:57] Caching tarball of preloaded images
	I0906 12:29:06.034756    4764 preload.go:174] Found /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 12:29:06.034762    4764 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0906 12:29:06.034838    4764 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/newest-cni-401000/config.json ...
	I0906 12:29:06.034866    4764 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/newest-cni-401000/config.json: {Name:mke58983b7dd57046ffcf51b5197a2501bf1a90f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:29:06.035079    4764 start.go:365] acquiring machines lock for newest-cni-401000: {Name:mkcd55fe6404821d1fd04819f324aec293fbf60b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:29:07.529128    4764 start.go:369] acquired machines lock for "newest-cni-401000" in 1.494045292s
	I0906 12:29:07.529320    4764 start.go:93] Provisioning new machine with config: &{Name:newest-cni-401000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.1 ClusterName:newest-cni-401000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountS
tring:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:29:07.529587    4764 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:29:07.540370    4764 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0906 12:29:07.589371    4764 start.go:159] libmachine.API.Create for "newest-cni-401000" (driver="qemu2")
	I0906 12:29:07.589407    4764 client.go:168] LocalClient.Create starting
	I0906 12:29:07.589528    4764 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem
	I0906 12:29:07.589578    4764 main.go:141] libmachine: Decoding PEM data...
	I0906 12:29:07.589597    4764 main.go:141] libmachine: Parsing certificate...
	I0906 12:29:07.589671    4764 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/cert.pem
	I0906 12:29:07.589707    4764 main.go:141] libmachine: Decoding PEM data...
	I0906 12:29:07.589721    4764 main.go:141] libmachine: Parsing certificate...
	I0906 12:29:07.590265    4764 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17116-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 12:29:07.722642    4764 main.go:141] libmachine: Creating SSH key...
	I0906 12:29:07.774357    4764 main.go:141] libmachine: Creating Disk image...
	I0906 12:29:07.774366    4764 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:29:07.774497    4764 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/newest-cni-401000/disk.qcow2.raw /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/newest-cni-401000/disk.qcow2
	I0906 12:29:07.783440    4764 main.go:141] libmachine: STDOUT: 
	I0906 12:29:07.783466    4764 main.go:141] libmachine: STDERR: 
	I0906 12:29:07.783540    4764 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/newest-cni-401000/disk.qcow2 +20000M
	I0906 12:29:07.791248    4764 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:29:07.791268    4764 main.go:141] libmachine: STDERR: 
	I0906 12:29:07.791286    4764 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/newest-cni-401000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/newest-cni-401000/disk.qcow2
	I0906 12:29:07.791297    4764 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:29:07.791337    4764 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/newest-cni-401000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/newest-cni-401000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/newest-cni-401000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:fb:22:53:84:41 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/newest-cni-401000/disk.qcow2
	I0906 12:29:07.792941    4764 main.go:141] libmachine: STDOUT: 
	I0906 12:29:07.792955    4764 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:29:07.792975    4764 client.go:171] LocalClient.Create took 203.55225ms
	I0906 12:29:09.795192    4764 start.go:128] duration metric: createHost completed in 2.265626459s
	I0906 12:29:09.795273    4764 start.go:83] releasing machines lock for "newest-cni-401000", held for 2.266148083s
	W0906 12:29:09.795376    4764 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:29:09.810048    4764 out.go:177] * Deleting "newest-cni-401000" in qemu2 ...
	W0906 12:29:09.833873    4764 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:29:09.833903    4764 start.go:687] Will try again in 5 seconds ...
	I0906 12:29:14.835969    4764 start.go:365] acquiring machines lock for newest-cni-401000: {Name:mkcd55fe6404821d1fd04819f324aec293fbf60b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:29:14.850801    4764 start.go:369] acquired machines lock for "newest-cni-401000" in 14.738708ms
	I0906 12:29:14.850877    4764 start.go:93] Provisioning new machine with config: &{Name:newest-cni-401000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.1 ClusterName:newest-cni-401000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountS
tring:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:29:14.851186    4764 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:29:14.859935    4764 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0906 12:29:14.903163    4764 start.go:159] libmachine.API.Create for "newest-cni-401000" (driver="qemu2")
	I0906 12:29:14.903198    4764 client.go:168] LocalClient.Create starting
	I0906 12:29:14.903339    4764 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/ca.pem
	I0906 12:29:14.903392    4764 main.go:141] libmachine: Decoding PEM data...
	I0906 12:29:14.903412    4764 main.go:141] libmachine: Parsing certificate...
	I0906 12:29:14.903476    4764 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17116-1006/.minikube/certs/cert.pem
	I0906 12:29:14.903512    4764 main.go:141] libmachine: Decoding PEM data...
	I0906 12:29:14.903526    4764 main.go:141] libmachine: Parsing certificate...
	I0906 12:29:14.904006    4764 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17116-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 12:29:15.062028    4764 main.go:141] libmachine: Creating SSH key...
	I0906 12:29:15.332104    4764 main.go:141] libmachine: Creating Disk image...
	I0906 12:29:15.332113    4764 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:29:15.332268    4764 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/newest-cni-401000/disk.qcow2.raw /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/newest-cni-401000/disk.qcow2
	I0906 12:29:15.341216    4764 main.go:141] libmachine: STDOUT: 
	I0906 12:29:15.341235    4764 main.go:141] libmachine: STDERR: 
	I0906 12:29:15.341313    4764 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/newest-cni-401000/disk.qcow2 +20000M
	I0906 12:29:15.349337    4764 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:29:15.349351    4764 main.go:141] libmachine: STDERR: 
	I0906 12:29:15.349366    4764 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/newest-cni-401000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/newest-cni-401000/disk.qcow2
	I0906 12:29:15.349374    4764 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:29:15.349417    4764 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/newest-cni-401000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/newest-cni-401000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/newest-cni-401000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:b3:fa:1b:35:5c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/newest-cni-401000/disk.qcow2
	I0906 12:29:15.351107    4764 main.go:141] libmachine: STDOUT: 
	I0906 12:29:15.351124    4764 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:29:15.351136    4764 client.go:171] LocalClient.Create took 447.910417ms
	I0906 12:29:17.353285    4764 start.go:128] duration metric: createHost completed in 2.502134792s
	I0906 12:29:17.353367    4764 start.go:83] releasing machines lock for "newest-cni-401000", held for 2.502605875s
	W0906 12:29:17.353814    4764 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-401000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:29:17.363558    4764 out.go:177] 
	W0906 12:29:17.367589    4764 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:29:17.367612    4764 out.go:239] * 
	W0906 12:29:17.370169    4764 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:29:17.378591    4764 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-401000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-401000 -n newest-cni-401000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-401000 -n newest-cni-401000: exit status 7 (68.516333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-401000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (11.55s)
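Every `exit status 80` failure in this report traces back to the same line: `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. the socket_vmnet daemon was not listening when the qemu2 driver tried to attach the VM's network. A minimal pre-flight check is sketched below; the socket path matches the `SocketVMnetPath` value in the config dumps in this log, but treat the snippet as a diagnostic sketch for the CI host, not part of the test suite itself.

```shell
#!/bin/sh
# Pre-flight check for the unix socket that minikube's qemu2 driver
# (via socket_vmnet_client) connects to. Default path taken from the
# SocketVMnetPath field reported in the logs above.
check_sock() {
  # -S is true only if the path exists and is a socket.
  if [ -S "$1" ]; then
    echo "socket present: $1"
  else
    echo "socket missing: $1"
  fi
}

check_sock "${SOCK:-/var/run/socket_vmnet}"
```

If the socket is missing, restarting the daemon before re-running the suite would be the first thing to try; on a Homebrew install that is typically `sudo brew services start socket_vmnet`, though how socket_vmnet is managed on this Jenkins agent is an assumption not visible in the log.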

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-649000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-649000 create -f testdata/busybox.yaml: exit status 1 (29.1175ms)

** stderr ** 
	error: no openapi getter

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-649000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-649000 -n default-k8s-diff-port-649000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-649000 -n default-k8s-diff-port-649000: exit status 7 (31.411167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-649000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-649000 -n default-k8s-diff-port-649000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-649000 -n default-k8s-diff-port-649000: exit status 7 (31.759541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-649000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-649000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-649000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-649000 describe deploy/metrics-server -n kube-system: exit status 1 (26.287625ms)

** stderr ** 
	error: context "default-k8s-diff-port-649000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-649000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-649000 -n default-k8s-diff-port-649000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-649000 -n default-k8s-diff-port-649000: exit status 7 (26.881125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-649000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.96s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-649000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-649000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.1: exit status 80 (6.917340958s)

-- stdout --
	* [default-k8s-diff-port-649000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17116
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node default-k8s-diff-port-649000 in cluster default-k8s-diff-port-649000
	* Restarting existing qemu2 VM for "default-k8s-diff-port-649000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-649000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 12:29:08.001813    4802 out.go:296] Setting OutFile to fd 1 ...
	I0906 12:29:08.001910    4802 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:29:08.001912    4802 out.go:309] Setting ErrFile to fd 2...
	I0906 12:29:08.001915    4802 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:29:08.002022    4802 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17116-1006/.minikube/bin
	I0906 12:29:08.002906    4802 out.go:303] Setting JSON to false
	I0906 12:29:08.017497    4802 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1722,"bootTime":1694026826,"procs":411,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:29:08.017568    4802 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 12:29:08.023760    4802 out.go:177] * [default-k8s-diff-port-649000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 12:29:08.030807    4802 out.go:177]   - MINIKUBE_LOCATION=17116
	I0906 12:29:08.030858    4802 notify.go:220] Checking for updates...
	I0906 12:29:08.033698    4802 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	I0906 12:29:08.037709    4802 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:29:08.039044    4802 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:29:08.041713    4802 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	I0906 12:29:08.044737    4802 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:29:08.048371    4802 config.go:182] Loaded profile config "default-k8s-diff-port-649000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 12:29:08.049040    4802 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 12:29:08.052691    4802 out.go:177] * Using the qemu2 driver based on existing profile
	I0906 12:29:08.059734    4802 start.go:298] selected driver: qemu2
	I0906 12:29:08.059741    4802 start.go:902] validating driver "qemu2" against &{Name:default-k8s-diff-port-649000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-649000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subne
t: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 12:29:08.059793    4802 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:29:08.061706    4802 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:29:08.061730    4802 cni.go:84] Creating CNI manager for ""
	I0906 12:29:08.061736    4802 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 12:29:08.061742    4802 start_flags.go:321] config:
	{Name:default-k8s-diff-port-649000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-6490
00 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 12:29:08.065529    4802 iso.go:125] acquiring lock: {Name:mk3f7665f27187d07892ba5c4ce3c49cda04e887 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:29:08.072720    4802 out.go:177] * Starting control plane node default-k8s-diff-port-649000 in cluster default-k8s-diff-port-649000
	I0906 12:29:08.076676    4802 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 12:29:08.076697    4802 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0906 12:29:08.076715    4802 cache.go:57] Caching tarball of preloaded images
	I0906 12:29:08.076764    4802 preload.go:174] Found /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 12:29:08.076770    4802 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0906 12:29:08.076829    4802 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/default-k8s-diff-port-649000/config.json ...
	I0906 12:29:08.077200    4802 start.go:365] acquiring machines lock for default-k8s-diff-port-649000: {Name:mkcd55fe6404821d1fd04819f324aec293fbf60b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:29:09.795457    4802 start.go:369] acquired machines lock for "default-k8s-diff-port-649000" in 1.718227375s
	I0906 12:29:09.795598    4802 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:29:09.795633    4802 fix.go:54] fixHost starting: 
	I0906 12:29:09.796380    4802 fix.go:102] recreateIfNeeded on default-k8s-diff-port-649000: state=Stopped err=<nil>
	W0906 12:29:09.796437    4802 fix.go:128] unexpected machine state, will restart: <nil>
	I0906 12:29:09.803124    4802 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-649000" ...
	I0906 12:29:09.814135    4802 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/default-k8s-diff-port-649000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/default-k8s-diff-port-649000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/default-k8s-diff-port-649000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:d4:18:26:5b:cc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/default-k8s-diff-port-649000/disk.qcow2
	I0906 12:29:09.823787    4802 main.go:141] libmachine: STDOUT: 
	I0906 12:29:09.823867    4802 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:29:09.824021    4802 fix.go:56] fixHost completed within 28.388709ms
	I0906 12:29:09.824046    4802 start.go:83] releasing machines lock for "default-k8s-diff-port-649000", held for 28.550834ms
	W0906 12:29:09.824085    4802 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:29:09.824287    4802 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:29:09.824307    4802 start.go:687] Will try again in 5 seconds ...
	I0906 12:29:14.826368    4802 start.go:365] acquiring machines lock for default-k8s-diff-port-649000: {Name:mkcd55fe6404821d1fd04819f324aec293fbf60b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:29:14.826985    4802 start.go:369] acquired machines lock for "default-k8s-diff-port-649000" in 469.208µs
	I0906 12:29:14.827170    4802 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:29:14.827194    4802 fix.go:54] fixHost starting: 
	I0906 12:29:14.828075    4802 fix.go:102] recreateIfNeeded on default-k8s-diff-port-649000: state=Stopped err=<nil>
	W0906 12:29:14.828103    4802 fix.go:128] unexpected machine state, will restart: <nil>
	I0906 12:29:14.836893    4802 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-649000" ...
	I0906 12:29:14.841139    4802 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/default-k8s-diff-port-649000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/default-k8s-diff-port-649000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/default-k8s-diff-port-649000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:d4:18:26:5b:cc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/default-k8s-diff-port-649000/disk.qcow2
	I0906 12:29:14.850525    4802 main.go:141] libmachine: STDOUT: 
	I0906 12:29:14.850590    4802 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:29:14.850681    4802 fix.go:56] fixHost completed within 23.490375ms
	I0906 12:29:14.850704    4802 start.go:83] releasing machines lock for "default-k8s-diff-port-649000", held for 23.672125ms
	W0906 12:29:14.850990    4802 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-649000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:29:14.867908    4802 out.go:177] 
	W0906 12:29:14.873012    4802 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:29:14.873029    4802 out.go:239] * 
	W0906 12:29:14.874442    4802 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:29:14.883938    4802 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-649000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-649000 -n default-k8s-diff-port-649000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-649000 -n default-k8s-diff-port-649000: exit status 7 (46.465208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-649000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.96s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-649000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-649000 -n default-k8s-diff-port-649000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-649000 -n default-k8s-diff-port-649000: exit status 7 (33.165709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-649000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-649000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-649000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-649000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.844667ms)

** stderr ** 
	error: context "default-k8s-diff-port-649000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-649000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-649000 -n default-k8s-diff-port-649000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-649000 -n default-k8s-diff-port-649000: exit status 7 (50.310542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-649000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.08s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-649000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-649000 "sudo crictl images -o json": exit status 89 (41.943292ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-649000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-649000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p default-k8s-diff-port-649000"
start_stop_delete_test.go:304: v1.28.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.1",
- 	"registry.k8s.io/kube-controller-manager:v1.28.1",
- 	"registry.k8s.io/kube-proxy:v1.28.1",
- 	"registry.k8s.io/kube-scheduler:v1.28.1",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-649000 -n default-k8s-diff-port-649000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-649000 -n default-k8s-diff-port-649000: exit status 7 (29.84775ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-649000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-649000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-649000 --alsologtostderr -v=1: exit status 89 (41.767416ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-649000"

-- /stdout --
** stderr ** 
	I0906 12:29:15.155722    4825 out.go:296] Setting OutFile to fd 1 ...
	I0906 12:29:15.155885    4825 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:29:15.155893    4825 out.go:309] Setting ErrFile to fd 2...
	I0906 12:29:15.155896    4825 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:29:15.156002    4825 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17116-1006/.minikube/bin
	I0906 12:29:15.156202    4825 out.go:303] Setting JSON to false
	I0906 12:29:15.156210    4825 mustload.go:65] Loading cluster: default-k8s-diff-port-649000
	I0906 12:29:15.156387    4825 config.go:182] Loaded profile config "default-k8s-diff-port-649000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 12:29:15.160924    4825 out.go:177] * The control plane node must be running for this command
	I0906 12:29:15.165023    4825 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-649000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-649000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-649000 -n default-k8s-diff-port-649000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-649000 -n default-k8s-diff-port-649000: exit status 7 (29.309334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-649000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-649000 -n default-k8s-diff-port-649000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-649000 -n default-k8s-diff-port-649000: exit status 7 (29.069792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-649000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)

TestStartStop/group/newest-cni/serial/SecondStart (5.24s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-401000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-401000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.1: exit status 80 (5.172004708s)

-- stdout --
	* [newest-cni-401000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17116
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node newest-cni-401000 in cluster newest-cni-401000
	* Restarting existing qemu2 VM for "newest-cni-401000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-401000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 12:29:17.706451    4861 out.go:296] Setting OutFile to fd 1 ...
	I0906 12:29:17.706570    4861 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:29:17.706572    4861 out.go:309] Setting ErrFile to fd 2...
	I0906 12:29:17.706575    4861 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:29:17.706681    4861 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17116-1006/.minikube/bin
	I0906 12:29:17.707674    4861 out.go:303] Setting JSON to false
	I0906 12:29:17.722783    4861 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1731,"bootTime":1694026826,"procs":411,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:29:17.722851    4861 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 12:29:17.727395    4861 out.go:177] * [newest-cni-401000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 12:29:17.734344    4861 out.go:177]   - MINIKUBE_LOCATION=17116
	I0906 12:29:17.734410    4861 notify.go:220] Checking for updates...
	I0906 12:29:17.738321    4861 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	I0906 12:29:17.739664    4861 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:29:17.742287    4861 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:29:17.745321    4861 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	I0906 12:29:17.748399    4861 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:29:17.751594    4861 config.go:182] Loaded profile config "newest-cni-401000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 12:29:17.751825    4861 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 12:29:17.756320    4861 out.go:177] * Using the qemu2 driver based on existing profile
	I0906 12:29:17.763288    4861 start.go:298] selected driver: qemu2
	I0906 12:29:17.763292    4861 start.go:902] validating driver "qemu2" against &{Name:newest-cni-401000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:newest-cni-401000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 12:29:17.763339    4861 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:29:17.765289    4861 start_flags.go:941] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0906 12:29:17.765317    4861 cni.go:84] Creating CNI manager for ""
	I0906 12:29:17.765325    4861 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 12:29:17.765332    4861 start_flags.go:321] config:
	{Name:newest-cni-401000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:newest-cni-401000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 12:29:17.769215    4861 iso.go:125] acquiring lock: {Name:mk3f7665f27187d07892ba5c4ce3c49cda04e887 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:29:17.775287    4861 out.go:177] * Starting control plane node newest-cni-401000 in cluster newest-cni-401000
	I0906 12:29:17.779240    4861 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 12:29:17.779254    4861 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0906 12:29:17.779266    4861 cache.go:57] Caching tarball of preloaded images
	I0906 12:29:17.779314    4861 preload.go:174] Found /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 12:29:17.779322    4861 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0906 12:29:17.779383    4861 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/newest-cni-401000/config.json ...
	I0906 12:29:17.779663    4861 start.go:365] acquiring machines lock for newest-cni-401000: {Name:mkcd55fe6404821d1fd04819f324aec293fbf60b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:29:17.779692    4861 start.go:369] acquired machines lock for "newest-cni-401000" in 22.791µs
	I0906 12:29:17.779701    4861 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:29:17.779706    4861 fix.go:54] fixHost starting: 
	I0906 12:29:17.779818    4861 fix.go:102] recreateIfNeeded on newest-cni-401000: state=Stopped err=<nil>
	W0906 12:29:17.779826    4861 fix.go:128] unexpected machine state, will restart: <nil>
	I0906 12:29:17.783291    4861 out.go:177] * Restarting existing qemu2 VM for "newest-cni-401000" ...
	I0906 12:29:17.791349    4861 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/newest-cni-401000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/newest-cni-401000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/newest-cni-401000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:b3:fa:1b:35:5c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/newest-cni-401000/disk.qcow2
	I0906 12:29:17.793378    4861 main.go:141] libmachine: STDOUT: 
	I0906 12:29:17.793395    4861 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:29:17.793423    4861 fix.go:56] fixHost completed within 13.716916ms
	I0906 12:29:17.793461    4861 start.go:83] releasing machines lock for "newest-cni-401000", held for 13.76525ms
	W0906 12:29:17.793469    4861 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:29:17.793503    4861 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:29:17.793507    4861 start.go:687] Will try again in 5 seconds ...
	I0906 12:29:22.795597    4861 start.go:365] acquiring machines lock for newest-cni-401000: {Name:mkcd55fe6404821d1fd04819f324aec293fbf60b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:29:22.796358    4861 start.go:369] acquired machines lock for "newest-cni-401000" in 637.5µs
	I0906 12:29:22.796521    4861 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:29:22.796540    4861 fix.go:54] fixHost starting: 
	I0906 12:29:22.797322    4861 fix.go:102] recreateIfNeeded on newest-cni-401000: state=Stopped err=<nil>
	W0906 12:29:22.797348    4861 fix.go:128] unexpected machine state, will restart: <nil>
	I0906 12:29:22.802839    4861 out.go:177] * Restarting existing qemu2 VM for "newest-cni-401000" ...
	I0906 12:29:22.810049    4861 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/newest-cni-401000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/newest-cni-401000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/newest-cni-401000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:b3:fa:1b:35:5c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/newest-cni-401000/disk.qcow2
	I0906 12:29:22.819576    4861 main.go:141] libmachine: STDOUT: 
	I0906 12:29:22.819627    4861 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:29:22.819701    4861 fix.go:56] fixHost completed within 23.163ms
	I0906 12:29:22.819723    4861 start.go:83] releasing machines lock for "newest-cni-401000", held for 23.342334ms
	W0906 12:29:22.819918    4861 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-401000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-401000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:29:22.827750    4861 out.go:177] 
	W0906 12:29:22.831839    4861 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:29:22.831870    4861 out.go:239] * 
	* 
	W0906 12:29:22.835019    4861 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:29:22.841893    4861 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-401000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-401000 -n newest-cni-401000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-401000 -n newest-cni-401000: exit status 7 (67.469458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-401000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.24s)
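Editor's note: every restart failure in this group reduces to the same root cause visible in the stderr above — qemu's network backend gets ECONNREFUSED when it tries to connect to the socket_vmnet unix socket, so the VM never boots and every subsequent step sees a stopped host. A minimal sketch of that failure mode (the socket path below is a stand-in for illustration, not the real /var/run/socket_vmnet):

```python
import socket

def unix_socket_reachable(path: str) -> bool:
    """Return True only if connect() to the unix socket at `path` succeeds."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.connect(path)
        return True
    except OSError:
        # Covers ConnectionRefusedError (socket exists, no listener) and
        # FileNotFoundError (socket file absent) -- both produce the
        # "Failed to connect" behavior seen in the log above.
        return False
    finally:
        s.close()

# With no listener bound, as in the failing runs above, connect() fails:
print(unix_socket_reachable("/tmp/no-such-socket_vmnet"))  # False
```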

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p newest-cni-401000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p newest-cni-401000 "sudo crictl images -o json": exit status 89 (44.355917ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p newest-cni-401000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p newest-cni-401000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p newest-cni-401000"
start_stop_delete_test.go:304: v1.28.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.1",
- 	"registry.k8s.io/kube-controller-manager:v1.28.1",
- 	"registry.k8s.io/kube-proxy:v1.28.1",
- 	"registry.k8s.io/kube-scheduler:v1.28.1",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-401000 -n newest-cni-401000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-401000 -n newest-cni-401000: exit status 7 (29.407916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-401000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-401000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-401000 --alsologtostderr -v=1: exit status 89 (41.287084ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p newest-cni-401000"

-- /stdout --
** stderr ** 
	I0906 12:29:23.022038    4875 out.go:296] Setting OutFile to fd 1 ...
	I0906 12:29:23.022198    4875 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:29:23.022201    4875 out.go:309] Setting ErrFile to fd 2...
	I0906 12:29:23.022204    4875 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:29:23.022318    4875 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17116-1006/.minikube/bin
	I0906 12:29:23.022534    4875 out.go:303] Setting JSON to false
	I0906 12:29:23.022542    4875 mustload.go:65] Loading cluster: newest-cni-401000
	I0906 12:29:23.022723    4875 config.go:182] Loaded profile config "newest-cni-401000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 12:29:23.027142    4875 out.go:177] * The control plane node must be running for this command
	I0906 12:29:23.031319    4875 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-401000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-401000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-401000 -n newest-cni-401000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-401000 -n newest-cni-401000: exit status 7 (29.001875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-401000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-401000 -n newest-cni-401000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-401000 -n newest-cni-401000: exit status 7 (28.671292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-401000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)


Test pass (136/244)

Order passed test Duration
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.1
10 TestDownloadOnly/v1.28.1/json-events 15.46
11 TestDownloadOnly/v1.28.1/preload-exists 0
14 TestDownloadOnly/v1.28.1/kubectl 0
15 TestDownloadOnly/v1.28.1/LogsDuration 0.08
16 TestDownloadOnly/DeleteAll 0.24
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.23
19 TestBinaryMirror 0.39
30 TestHyperKitDriverInstallOrUpdate 8.3
33 TestErrorSpam/setup 29.73
34 TestErrorSpam/start 0.33
35 TestErrorSpam/status 0.26
36 TestErrorSpam/pause 0.65
37 TestErrorSpam/unpause 0.62
38 TestErrorSpam/stop 3.23
41 TestFunctional/serial/CopySyncFile 0
42 TestFunctional/serial/StartWithProxy 84.38
43 TestFunctional/serial/AuditLog 0
44 TestFunctional/serial/SoftStart 34.8
45 TestFunctional/serial/KubeContext 0.03
46 TestFunctional/serial/KubectlGetPods 0.05
49 TestFunctional/serial/CacheCmd/cache/add_remote 3.5
50 TestFunctional/serial/CacheCmd/cache/add_local 1.12
51 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.03
52 TestFunctional/serial/CacheCmd/cache/list 0.03
53 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.08
54 TestFunctional/serial/CacheCmd/cache/cache_reload 0.93
55 TestFunctional/serial/CacheCmd/cache/delete 0.07
56 TestFunctional/serial/MinikubeKubectlCmd 0.4
57 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.54
58 TestFunctional/serial/ExtraConfig 35.41
59 TestFunctional/serial/ComponentHealth 0.04
60 TestFunctional/serial/LogsCmd 0.63
61 TestFunctional/serial/LogsFileCmd 0.58
62 TestFunctional/serial/InvalidService 4.14
64 TestFunctional/parallel/ConfigCmd 0.2
65 TestFunctional/parallel/DashboardCmd 7.9
66 TestFunctional/parallel/DryRun 0.22
67 TestFunctional/parallel/InternationalLanguage 0.1
68 TestFunctional/parallel/StatusCmd 0.26
73 TestFunctional/parallel/AddonsCmd 0.12
74 TestFunctional/parallel/PersistentVolumeClaim 24.39
76 TestFunctional/parallel/SSHCmd 0.14
77 TestFunctional/parallel/CpCmd 0.29
79 TestFunctional/parallel/FileSync 0.07
80 TestFunctional/parallel/CertSync 0.44
84 TestFunctional/parallel/NodeLabels 0.04
86 TestFunctional/parallel/NonActiveRuntimeDisabled 0.1
88 TestFunctional/parallel/License 0.19
89 TestFunctional/parallel/Version/short 0.04
90 TestFunctional/parallel/Version/components 0.22
91 TestFunctional/parallel/ImageCommands/ImageListShort 0.09
92 TestFunctional/parallel/ImageCommands/ImageListTable 0.09
93 TestFunctional/parallel/ImageCommands/ImageListJson 0.08
94 TestFunctional/parallel/ImageCommands/ImageListYaml 0.08
95 TestFunctional/parallel/ImageCommands/ImageBuild 2.02
96 TestFunctional/parallel/ImageCommands/Setup 1.51
97 TestFunctional/parallel/DockerEnv/bash 0.4
98 TestFunctional/parallel/UpdateContextCmd/no_changes 0.06
99 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
100 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.06
101 TestFunctional/parallel/ServiceCmd/DeployApp 13.13
102 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.27
103 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.59
104 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 3.72
105 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.52
106 TestFunctional/parallel/ImageCommands/ImageRemove 0.16
107 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.61
108 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.61
111 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
113 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.12
114 TestFunctional/parallel/ServiceCmd/List 0.11
115 TestFunctional/parallel/ServiceCmd/JSONOutput 0.1
116 TestFunctional/parallel/ServiceCmd/HTTPS 0.11
117 TestFunctional/parallel/ServiceCmd/Format 0.11
118 TestFunctional/parallel/ServiceCmd/URL 0.11
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
121 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
122 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
123 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.19
126 TestFunctional/parallel/ProfileCmd/profile_list 0.15
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.16
128 TestFunctional/parallel/MountCmd/any-port 5.12
129 TestFunctional/parallel/MountCmd/specific-port 1.22
131 TestFunctional/delete_addon-resizer_images 0.12
132 TestFunctional/delete_my-image_image 0.04
133 TestFunctional/delete_minikube_cached_images 0.04
137 TestImageBuild/serial/Setup 30.53
138 TestImageBuild/serial/NormalBuild 1.03
140 TestImageBuild/serial/BuildWithDockerIgnore 0.13
141 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.11
144 TestIngressAddonLegacy/StartLegacyK8sCluster 63.12
146 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 14.82
147 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.22
151 TestJSONOutput/start/Command 44.58
152 TestJSONOutput/start/Audit 0
154 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
155 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
157 TestJSONOutput/pause/Command 0.28
158 TestJSONOutput/pause/Audit 0
160 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
161 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
163 TestJSONOutput/unpause/Command 0.22
164 TestJSONOutput/unpause/Audit 0
166 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
167 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
169 TestJSONOutput/stop/Command 12.08
170 TestJSONOutput/stop/Audit 0
172 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
173 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
174 TestErrorJSONOutput 0.32
179 TestMainNoArgs 0.03
180 TestMinikubeProfile 61.29
236 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
240 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
241 TestNoKubernetes/serial/ProfileList 0.14
242 TestNoKubernetes/serial/Stop 0.06
244 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
262 TestStartStop/group/old-k8s-version/serial/Stop 0.06
263 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.09
273 TestStartStop/group/no-preload/serial/Stop 0.06
274 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.09
284 TestStartStop/group/embed-certs/serial/Stop 0.06
285 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.09
295 TestStartStop/group/default-k8s-diff-port/serial/Stop 0.06
296 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.09
302 TestStartStop/group/newest-cni/serial/DeployApp 0
303 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
304 TestStartStop/group/newest-cni/serial/Stop 0.06
305 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.09
307 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
308 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-264000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-264000: exit status 85 (97.215542ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-264000 | jenkins | v1.31.2 | 06 Sep 23 12:09 PDT |          |
	|         | -p download-only-264000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/06 12:09:20
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.7 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 12:09:20.894122    1423 out.go:296] Setting OutFile to fd 1 ...
	I0906 12:09:20.894234    1423 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:09:20.894237    1423 out.go:309] Setting ErrFile to fd 2...
	I0906 12:09:20.894239    1423 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:09:20.894343    1423 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17116-1006/.minikube/bin
	W0906 12:09:20.894398    1423 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17116-1006/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17116-1006/.minikube/config/config.json: no such file or directory
	I0906 12:09:20.895557    1423 out.go:303] Setting JSON to true
	I0906 12:09:20.912826    1423 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":534,"bootTime":1694026826,"procs":409,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:09:20.912887    1423 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 12:09:20.918164    1423 out.go:97] [download-only-264000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 12:09:20.918305    1423 notify.go:220] Checking for updates...
	I0906 12:09:20.922281    1423 out.go:169] MINIKUBE_LOCATION=17116
	W0906 12:09:20.918493    1423 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball: no such file or directory
	I0906 12:09:20.930265    1423 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	I0906 12:09:20.934236    1423 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:09:20.937302    1423 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:09:20.940325    1423 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	W0906 12:09:20.946193    1423 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0906 12:09:20.946376    1423 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 12:09:20.952171    1423 out.go:97] Using the qemu2 driver based on user configuration
	I0906 12:09:20.952179    1423 start.go:298] selected driver: qemu2
	I0906 12:09:20.952184    1423 start.go:902] validating driver "qemu2" against <nil>
	I0906 12:09:20.952270    1423 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0906 12:09:20.956388    1423 out.go:169] Automatically selected the socket_vmnet network
	I0906 12:09:20.961991    1423 start_flags.go:384] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0906 12:09:20.962080    1423 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0906 12:09:20.962146    1423 cni.go:84] Creating CNI manager for ""
	I0906 12:09:20.962161    1423 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0906 12:09:20.962164    1423 start_flags.go:321] config:
	{Name:download-only-264000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-264000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRun
time:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 12:09:20.967687    1423 iso.go:125] acquiring lock: {Name:mk3f7665f27187d07892ba5c4ce3c49cda04e887 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:09:20.972585    1423 out.go:97] Downloading VM boot image ...
	I0906 12:09:20.972611    1423 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso
	I0906 12:09:26.736143    1423 out.go:97] Starting control plane node download-only-264000 in cluster download-only-264000
	I0906 12:09:26.736171    1423 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0906 12:09:26.793553    1423 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0906 12:09:26.793640    1423 cache.go:57] Caching tarball of preloaded images
	I0906 12:09:26.793805    1423 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0906 12:09:26.797892    1423 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0906 12:09:26.797899    1423 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0906 12:09:26.874709    1423 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0906 12:09:34.090349    1423 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0906 12:09:34.090482    1423 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0906 12:09:34.730353    1423 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0906 12:09:34.730535    1423 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/download-only-264000/config.json ...
	I0906 12:09:34.730554    1423 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/download-only-264000/config.json: {Name:mk223a71e1db329594e19bcb005209a7e85e101d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:09:34.730770    1423 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0906 12:09:34.730936    1423 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 -> /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/darwin/arm64/v1.16.0/kubectl
	I0906 12:09:35.081317    1423 out.go:169] 
	W0906 12:09:35.085222    1423 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/17116-1006/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x10803df68 0x10803df68 0x10803df68 0x10803df68 0x10803df68 0x10803df68 0x10803df68] Decompressors:map[bz2:0x14000653ce0 gz:0x14000653ce8 tar:0x14000653c90 tar.bz2:0x14000653ca0 tar.gz:0x14000653cb0 tar.xz:0x14000653cc0 tar.zst:0x14000653cd0 tbz2:0x14000653ca0 tgz:0x14000653cb0 txz:0x14000653cc0 tzst:0x14000653cd0 xz:0x14000653cf0 zip:0x14000653d00 zst:0x14000653cf8] Getters:map[file:0x140011b6690 http:0x140011d2140 https:0x140011d2190] Dir:false ProgressListener
:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0906 12:09:35.085253    1423 out_reason.go:110] 
	W0906 12:09:35.091288    1423 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:09:35.094188    1423 out.go:169] 
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-264000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.10s)
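The "Failed to cache kubectl" error in the log above is a 404 on the `.sha1` checksum fetch for the darwin/arm64 kubectl binary. A minimal sketch of the URL minikube attempted, built from the version/os/arch values in the failing run (the variable names are illustrative, not minikube's own; the URL pattern is copied verbatim from the `download.go` log line):

```shell
# Reconstruct the kubectl download URL that returned 404 in the log above.
VERSION="v1.16.0"
OS="darwin"
ARCH="arm64"
URL="https://dl.k8s.io/release/${VERSION}/bin/${OS}/${ARCH}/kubectl"
echo "${URL}"          # binary URL minikube tried to cache
echo "${URL}.sha1"     # checksum URL that answered 404
# To probe availability without downloading (assuming curl is installed):
#   curl -sI "${URL}.sha1" | head -n1
```

v1.16.0 predates upstream darwin/arm64 kubectl builds, so the checksum file plausibly never existed for this os/arch pair; the test still passes because `logs` is expected to exit 85 here.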

TestDownloadOnly/v1.28.1/json-events (15.46s)

=== RUN   TestDownloadOnly/v1.28.1/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-264000 --force --alsologtostderr --kubernetes-version=v1.28.1 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-264000 --force --alsologtostderr --kubernetes-version=v1.28.1 --container-runtime=docker --driver=qemu2 : (15.457650625s)
--- PASS: TestDownloadOnly/v1.28.1/json-events (15.46s)

TestDownloadOnly/v1.28.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.1/preload-exists
--- PASS: TestDownloadOnly/v1.28.1/preload-exists (0.00s)

TestDownloadOnly/v1.28.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.1/kubectl
--- PASS: TestDownloadOnly/v1.28.1/kubectl (0.00s)

TestDownloadOnly/v1.28.1/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.28.1/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-264000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-264000: exit status 85 (76.40825ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-264000 | jenkins | v1.31.2 | 06 Sep 23 12:09 PDT |          |
	|         | -p download-only-264000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-264000 | jenkins | v1.31.2 | 06 Sep 23 12:09 PDT |          |
	|         | -p download-only-264000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.1   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/06 12:09:35
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.7 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 12:09:35.282762    1436 out.go:296] Setting OutFile to fd 1 ...
	I0906 12:09:35.282901    1436 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:09:35.282904    1436 out.go:309] Setting ErrFile to fd 2...
	I0906 12:09:35.282906    1436 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:09:35.283007    1436 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17116-1006/.minikube/bin
	W0906 12:09:35.283065    1436 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17116-1006/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17116-1006/.minikube/config/config.json: no such file or directory
	I0906 12:09:35.283963    1436 out.go:303] Setting JSON to true
	I0906 12:09:35.298916    1436 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":549,"bootTime":1694026826,"procs":412,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:09:35.298986    1436 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 12:09:35.304341    1436 out.go:97] [download-only-264000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 12:09:35.308315    1436 out.go:169] MINIKUBE_LOCATION=17116
	I0906 12:09:35.304444    1436 notify.go:220] Checking for updates...
	I0906 12:09:35.316298    1436 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	I0906 12:09:35.317801    1436 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:09:35.321291    1436 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:09:35.324293    1436 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	W0906 12:09:35.330320    1436 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0906 12:09:35.330596    1436 config.go:182] Loaded profile config "download-only-264000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0906 12:09:35.330633    1436 start.go:810] api.Load failed for download-only-264000: filestore "download-only-264000": Docker machine "download-only-264000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0906 12:09:35.330674    1436 driver.go:373] Setting default libvirt URI to qemu:///system
	W0906 12:09:35.330688    1436 start.go:810] api.Load failed for download-only-264000: filestore "download-only-264000": Docker machine "download-only-264000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0906 12:09:35.334411    1436 out.go:97] Using the qemu2 driver based on existing profile
	I0906 12:09:35.334418    1436 start.go:298] selected driver: qemu2
	I0906 12:09:35.334420    1436 start.go:902] validating driver "qemu2" against &{Name:download-only-264000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-264000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binar
yMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 12:09:35.336282    1436 cni.go:84] Creating CNI manager for ""
	I0906 12:09:35.336294    1436 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 12:09:35.336301    1436 start_flags.go:321] config:
	{Name:download-only-264000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:download-only-264000 Namespace:def
ault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath
: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 12:09:35.340123    1436 iso.go:125] acquiring lock: {Name:mk3f7665f27187d07892ba5c4ce3c49cda04e887 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:09:35.343331    1436 out.go:97] Starting control plane node download-only-264000 in cluster download-only-264000
	I0906 12:09:35.343338    1436 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 12:09:35.393337    1436 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.1/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0906 12:09:35.393358    1436 cache.go:57] Caching tarball of preloaded images
	I0906 12:09:35.393546    1436 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 12:09:35.397319    1436 out.go:97] Downloading Kubernetes v1.28.1 preload ...
	I0906 12:09:35.397325    1436 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 ...
	I0906 12:09:35.479832    1436 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.1/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4?checksum=md5:014fa2c9750ed18a91c50dffb6ed7aeb -> /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0906 12:09:43.722823    1436 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 ...
	I0906 12:09:43.722963    1436 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 ...
	I0906 12:09:44.304276    1436 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0906 12:09:44.304362    1436 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/download-only-264000/config.json ...
	I0906 12:09:44.304634    1436 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 12:09:44.304797    1436 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.1/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.1/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/17116-1006/.minikube/cache/darwin/arm64/v1.28.1/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-264000"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.1/LogsDuration (0.08s)

TestDownloadOnly/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.24s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.23s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-264000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.23s)

TestBinaryMirror (0.39s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-148000 --alsologtostderr --binary-mirror http://127.0.0.1:49367 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-148000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-148000
--- PASS: TestBinaryMirror (0.39s)

TestHyperKitDriverInstallOrUpdate (8.3s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (8.30s)

TestErrorSpam/setup (29.73s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-495000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-495000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-495000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-495000 --driver=qemu2 : (29.727445584s)
--- PASS: TestErrorSpam/setup (29.73s)

TestErrorSpam/start (0.33s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-495000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-495000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-495000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-495000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-495000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-495000 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

TestErrorSpam/status (0.26s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-495000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-495000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-495000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-495000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-495000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-495000 status
--- PASS: TestErrorSpam/status (0.26s)

TestErrorSpam/pause (0.65s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-495000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-495000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-495000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-495000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-495000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-495000 pause
--- PASS: TestErrorSpam/pause (0.65s)

TestErrorSpam/unpause (0.62s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-495000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-495000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-495000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-495000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-495000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-495000 unpause
--- PASS: TestErrorSpam/unpause (0.62s)

TestErrorSpam/stop (3.23s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-495000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-495000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-495000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-495000 stop: (3.066864875s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-495000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-495000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-495000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-495000 stop
--- PASS: TestErrorSpam/stop (3.23s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/17116-1006/.minikube/files/etc/test/nested/copy/1421/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (84.38s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-779000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Done: out/minikube-darwin-arm64 start -p functional-779000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (1m24.381089584s)
--- PASS: TestFunctional/serial/StartWithProxy (84.38s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (34.8s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-779000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-arm64 start -p functional-779000 --alsologtostderr -v=8: (34.801946458s)
functional_test.go:659: soft start took 34.80229975s for "functional-779000" cluster.
--- PASS: TestFunctional/serial/SoftStart (34.80s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.05s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-779000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.05s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.5s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-779000 cache add registry.k8s.io/pause:3.1: (1.242344417s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-779000 cache add registry.k8s.io/pause:3.3: (1.164821459s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-779000 cache add registry.k8s.io/pause:latest: (1.090265375s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.50s)

TestFunctional/serial/CacheCmd/cache/add_local (1.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-779000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local3857241657/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 cache add minikube-local-cache-test:functional-779000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 cache delete minikube-local-cache-test:functional-779000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-779000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.12s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.03s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.93s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-779000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (73.548958ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.93s)

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (0.4s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 kubectl -- --context functional-779000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.40s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.54s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-779000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.54s)

TestFunctional/serial/ExtraConfig (35.41s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-779000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-darwin-arm64 start -p functional-779000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (35.410919s)
functional_test.go:757: restart took 35.411028209s for "functional-779000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (35.41s)

TestFunctional/serial/ComponentHealth (0.04s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-779000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)

TestFunctional/serial/LogsCmd (0.63s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.63s)

TestFunctional/serial/LogsFileCmd (0.58s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd2175302672/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.58s)

TestFunctional/serial/InvalidService (4.14s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-779000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-779000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-779000: exit status 115 (109.593916ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:32110 |
	|-----------|-------------|-------------|----------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-779000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.14s)

TestFunctional/parallel/ConfigCmd (0.2s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-779000 config get cpus: exit status 14 (28.952875ms)

** stderr **
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-779000 config get cpus: exit status 14 (28.6095ms)

** stderr **
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.20s)

TestFunctional/parallel/DashboardCmd (7.9s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-779000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-779000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2125: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.90s)

TestFunctional/parallel/DryRun (0.22s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-779000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-779000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (110.893625ms)

-- stdout --
	* [functional-779000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17116
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile

-- /stdout --
** stderr ** 
	I0906 12:14:51.201282    2108 out.go:296] Setting OutFile to fd 1 ...
	I0906 12:14:51.201397    2108 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:14:51.201400    2108 out.go:309] Setting ErrFile to fd 2...
	I0906 12:14:51.201402    2108 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:14:51.201516    2108 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17116-1006/.minikube/bin
	I0906 12:14:51.202848    2108 out.go:303] Setting JSON to false
	I0906 12:14:51.219868    2108 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":865,"bootTime":1694026826,"procs":403,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:14:51.219926    2108 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 12:14:51.223706    2108 out.go:177] * [functional-779000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 12:14:51.230547    2108 out.go:177]   - MINIKUBE_LOCATION=17116
	I0906 12:14:51.233646    2108 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	I0906 12:14:51.230636    2108 notify.go:220] Checking for updates...
	I0906 12:14:51.239508    2108 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:14:51.242558    2108 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:14:51.245518    2108 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	I0906 12:14:51.248536    2108 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:14:51.251757    2108 config.go:182] Loaded profile config "functional-779000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 12:14:51.251980    2108 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 12:14:51.255511    2108 out.go:177] * Using the qemu2 driver based on existing profile
	I0906 12:14:51.262568    2108 start.go:298] selected driver: qemu2
	I0906 12:14:51.262573    2108 start.go:902] validating driver "qemu2" against &{Name:functional-779000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:functional-779000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 12:14:51.262618    2108 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:14:51.268530    2108 out.go:177] 
	W0906 12:14:51.272521    2108 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0906 12:14:51.275444    2108 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-779000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.22s)

TestFunctional/parallel/InternationalLanguage (0.1s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-779000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-779000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (104.748917ms)
-- stdout --
	* [functional-779000] minikube v1.31.2 sur Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17116
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0906 12:14:51.412820    2119 out.go:296] Setting OutFile to fd 1 ...
	I0906 12:14:51.412919    2119 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:14:51.412922    2119 out.go:309] Setting ErrFile to fd 2...
	I0906 12:14:51.412924    2119 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 12:14:51.413050    2119 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17116-1006/.minikube/bin
	I0906 12:14:51.414361    2119 out.go:303] Setting JSON to false
	I0906 12:14:51.430553    2119 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":865,"bootTime":1694026826,"procs":403,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:14:51.430616    2119 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 12:14:51.434493    2119 out.go:177] * [functional-779000] minikube v1.31.2 sur Darwin 13.5.1 (arm64)
	I0906 12:14:51.440585    2119 out.go:177]   - MINIKUBE_LOCATION=17116
	I0906 12:14:51.444538    2119 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	I0906 12:14:51.440593    2119 notify.go:220] Checking for updates...
	I0906 12:14:51.450575    2119 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:14:51.453585    2119 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:14:51.456556    2119 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	I0906 12:14:51.459566    2119 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:14:51.462683    2119 config.go:182] Loaded profile config "functional-779000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 12:14:51.462913    2119 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 12:14:51.467500    2119 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0906 12:14:51.474523    2119 start.go:298] selected driver: qemu2
	I0906 12:14:51.474528    2119 start.go:902] validating driver "qemu2" against &{Name:functional-779000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:functional-779000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 12:14:51.474579    2119 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:14:51.480472    2119 out.go:177] 
	W0906 12:14:51.484402    2119 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0906 12:14:51.488488    2119 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.10s)

TestFunctional/parallel/StatusCmd (0.26s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.26s)

TestFunctional/parallel/AddonsCmd (0.12s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/PersistentVolumeClaim (24.39s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [8ba268d9-d706-43b1-b613-105f8077cb20] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.007017959s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-779000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-779000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-779000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-779000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c902af51-7cf4-4466-b98e-a2bde83f0b06] Pending
helpers_test.go:344: "sp-pod" [c902af51-7cf4-4466-b98e-a2bde83f0b06] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [c902af51-7cf4-4466-b98e-a2bde83f0b06] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.009035125s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-779000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-779000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-779000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [9e9692df-2bf8-49ea-807b-2a5deee34b11] Pending
helpers_test.go:344: "sp-pod" [9e9692df-2bf8-49ea-807b-2a5deee34b11] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [9e9692df-2bf8-49ea-807b-2a5deee34b11] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.007967209s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-779000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.39s)

TestFunctional/parallel/SSHCmd (0.14s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.14s)

TestFunctional/parallel/CpCmd (0.29s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 ssh -n functional-779000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 cp functional-779000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd3958114726/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 ssh -n functional-779000 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.29s)

TestFunctional/parallel/FileSync (0.07s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1421/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 ssh "sudo cat /etc/test/nested/copy/1421/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.07s)

TestFunctional/parallel/CertSync (0.44s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1421.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 ssh "sudo cat /etc/ssl/certs/1421.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1421.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 ssh "sudo cat /usr/share/ca-certificates/1421.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/14212.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 ssh "sudo cat /etc/ssl/certs/14212.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/14212.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 ssh "sudo cat /usr/share/ca-certificates/14212.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.44s)

TestFunctional/parallel/NodeLabels (0.04s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-779000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.1s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-779000 ssh "sudo systemctl is-active crio": exit status 1 (97.198959ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.10s)

TestFunctional/parallel/License (0.19s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.19s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.22s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.22s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-779000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.1
registry.k8s.io/kube-proxy:v1.28.1
registry.k8s.io/kube-controller-manager:v1.28.1
registry.k8s.io/kube-apiserver:v1.28.1
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-779000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-779000
docker.io/kubernetesui/metrics-scraper:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-779000 image ls --format short --alsologtostderr:
I0906 12:14:56.334240    2149 out.go:296] Setting OutFile to fd 1 ...
I0906 12:14:56.334416    2149 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0906 12:14:56.334423    2149 out.go:309] Setting ErrFile to fd 2...
I0906 12:14:56.334425    2149 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0906 12:14:56.334544    2149 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17116-1006/.minikube/bin
I0906 12:14:56.334986    2149 config.go:182] Loaded profile config "functional-779000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0906 12:14:56.335056    2149 config.go:182] Loaded profile config "functional-779000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0906 12:14:56.336129    2149 ssh_runner.go:195] Run: systemctl --version
I0906 12:14:56.336139    2149 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/functional-779000/id_rsa Username:docker}
I0906 12:14:56.372809    2149 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.09s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-779000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-apiserver              | v1.28.1           | b29fb62480892 | 119MB  |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| registry.k8s.io/etcd                        | 3.5.9-0           | 9cdd6470f48c8 | 181MB  |
| registry.k8s.io/coredns/coredns             | v1.10.1           | 97e04611ad434 | 51.4MB |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
| gcr.io/google-containers/addon-resizer      | functional-779000 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/kube-scheduler              | v1.28.1           | b4a5a57e99492 | 57.8MB |
| registry.k8s.io/kube-controller-manager     | v1.28.1           | 8b6e1980b7584 | 116MB  |
| docker.io/library/nginx                     | latest            | ab73c7fd67234 | 192MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| docker.io/library/minikube-local-cache-test | functional-779000 | 7c442dfc2ac86 | 30B    |
| registry.k8s.io/kube-proxy                  | v1.28.1           | 812f5241df7fd | 68.3MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| docker.io/library/nginx                     | alpine            | fa0c6bb795403 | 43.4MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-779000 image ls --format table --alsologtostderr:
I0906 12:14:56.510822    2153 out.go:296] Setting OutFile to fd 1 ...
I0906 12:14:56.510986    2153 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0906 12:14:56.510989    2153 out.go:309] Setting ErrFile to fd 2...
I0906 12:14:56.510992    2153 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0906 12:14:56.511107    2153 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17116-1006/.minikube/bin
I0906 12:14:56.511527    2153 config.go:182] Loaded profile config "functional-779000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0906 12:14:56.511592    2153 config.go:182] Loaded profile config "functional-779000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0906 12:14:56.512354    2153 ssh_runner.go:195] Run: systemctl --version
I0906 12:14:56.512365    2153 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/functional-779000/id_rsa Username:docker}
I0906 12:14:56.544788    2153 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.09s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-779000 image ls --format json --alsologtostderr:
[{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"8b6e1980b7584ebf92ee961322982c26a525c4e4e2181e037b8854697be71965","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.1"],"size":"116000000"},{"id":"812f5241df7fd64adb98d461bd6259a825a371fb3b2d5258752579380bc39c26","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.1"],"size":"68300000"},{"id":"ab73c7fd672341e41ec600081253d0b99ea31d0c1acdfb46a1485004472da7ac","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"192000000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-779000"],"size":"32900000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"fa0c6bb795403f8762e5cbf7b9f395aa036e7bd61c707485c1968b79bb3da9f1","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43400000"},{"id":"b29fb62480892633ac479243b9841b88f9ae30865773fd76b97522541cd5633a","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.1"],"size":"119000000"},{"id":"b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.1"],"size":"57800000"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51400000"},{"id":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"181000000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"7c442dfc2ac867d6bf1f0eb8b4d435dcd8cf7282083e6c9193d474cfe0941b9a","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-779000"],"size":"30"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-779000 image ls --format json --alsologtostderr:
I0906 12:14:56.427225    2151 out.go:296] Setting OutFile to fd 1 ...
I0906 12:14:56.427384    2151 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0906 12:14:56.427388    2151 out.go:309] Setting ErrFile to fd 2...
I0906 12:14:56.427390    2151 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0906 12:14:56.427618    2151 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17116-1006/.minikube/bin
I0906 12:14:56.428153    2151 config.go:182] Loaded profile config "functional-779000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0906 12:14:56.428240    2151 config.go:182] Loaded profile config "functional-779000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0906 12:14:56.429184    2151 ssh_runner.go:195] Run: systemctl --version
I0906 12:14:56.429194    2151 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/functional-779000/id_rsa Username:docker}
I0906 12:14:56.463117    2151 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)
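Since `image ls --format json` emits a plain array of image records, its output pipes cleanly into scripts. A minimal sketch (not part of the test suite; the sample data is abbreviated from the listing above, with ids truncated for readability):

```python
import json

# Abbreviated sample of `minikube image ls --format json` output,
# taken from the test run above (ids truncated).
raw = """[
  {"id": "72565bf5bbedf...", "repoDigests": [],
   "repoTags": ["registry.k8s.io/echoserver-arm:1.8"], "size": "85000000"},
  {"id": "ab73c7fd67234...", "repoDigests": [],
   "repoTags": ["docker.io/library/nginx:latest"], "size": "192000000"}
]"""

images = json.loads(raw)
# Sizes are reported as decimal byte counts in strings; sort largest-first.
by_size = sorted(images, key=lambda img: int(img["size"]), reverse=True)
for img in by_size:
    print(img["repoTags"][0], img["size"])
```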

TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-779000 image ls --format yaml --alsologtostderr:
- id: 9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "181000000"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51400000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-779000
size: "32900000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 7c442dfc2ac867d6bf1f0eb8b4d435dcd8cf7282083e6c9193d474cfe0941b9a
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-779000
size: "30"
- id: 812f5241df7fd64adb98d461bd6259a825a371fb3b2d5258752579380bc39c26
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.1
size: "68300000"
- id: b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.1
size: "57800000"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: fa0c6bb795403f8762e5cbf7b9f395aa036e7bd61c707485c1968b79bb3da9f1
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43400000"
- id: b29fb62480892633ac479243b9841b88f9ae30865773fd76b97522541cd5633a
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.1
size: "119000000"
- id: 8b6e1980b7584ebf92ee961322982c26a525c4e4e2181e037b8854697be71965
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.1
size: "116000000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: ab73c7fd672341e41ec600081253d0b99ea31d0c1acdfb46a1485004472da7ac
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "192000000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-779000 image ls --format yaml --alsologtostderr:
I0906 12:14:56.255158    2147 out.go:296] Setting OutFile to fd 1 ...
I0906 12:14:56.255291    2147 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0906 12:14:56.255294    2147 out.go:309] Setting ErrFile to fd 2...
I0906 12:14:56.255297    2147 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0906 12:14:56.255403    2147 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17116-1006/.minikube/bin
I0906 12:14:56.255774    2147 config.go:182] Loaded profile config "functional-779000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0906 12:14:56.255833    2147 config.go:182] Loaded profile config "functional-779000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0906 12:14:56.256630    2147 ssh_runner.go:195] Run: systemctl --version
I0906 12:14:56.256639    2147 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/functional-779000/id_rsa Username:docker}
I0906 12:14:56.288322    2147 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.02s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-779000 ssh pgrep buildkitd: exit status 1 (65.933916ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 image build -t localhost/my-image:functional-779000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-arm64 -p functional-779000 image build -t localhost/my-image:functional-779000 testdata/build --alsologtostderr: (1.875854125s)
functional_test.go:319: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-779000 image build -t localhost/my-image:functional-779000 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
a01966dde7f8: Pulling fs layer
a01966dde7f8: Verifying Checksum
a01966dde7f8: Download complete
a01966dde7f8: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> 71a676dd070f
Step 2/3 : RUN true
---> Running in 2c4e2037b185
Removing intermediate container 2c4e2037b185
---> fa83c7dae285
Step 3/3 : ADD content.txt /
---> e6ce3d6088d9
Successfully built e6ce3d6088d9
Successfully tagged localhost/my-image:functional-779000
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-779000 image build -t localhost/my-image:functional-779000 testdata/build --alsologtostderr:
I0906 12:14:56.663061    2157 out.go:296] Setting OutFile to fd 1 ...
I0906 12:14:56.663256    2157 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0906 12:14:56.663259    2157 out.go:309] Setting ErrFile to fd 2...
I0906 12:14:56.663262    2157 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0906 12:14:56.663380    2157 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17116-1006/.minikube/bin
I0906 12:14:56.663774    2157 config.go:182] Loaded profile config "functional-779000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0906 12:14:56.664152    2157 config.go:182] Loaded profile config "functional-779000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0906 12:14:56.665057    2157 ssh_runner.go:195] Run: systemctl --version
I0906 12:14:56.665067    2157 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17116-1006/.minikube/machines/functional-779000/id_rsa Username:docker}
I0906 12:14:56.697843    2157 build_images.go:151] Building image from path: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.1903339706.tar
I0906 12:14:56.697916    2157 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0906 12:14:56.701443    2157 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1903339706.tar
I0906 12:14:56.703131    2157 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1903339706.tar: stat -c "%s %y" /var/lib/minikube/build/build.1903339706.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1903339706.tar': No such file or directory
I0906 12:14:56.703154    2157 ssh_runner.go:362] scp /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.1903339706.tar --> /var/lib/minikube/build/build.1903339706.tar (3072 bytes)
I0906 12:14:56.711207    2157 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1903339706
I0906 12:14:56.713947    2157 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1903339706 -xf /var/lib/minikube/build/build.1903339706.tar
I0906 12:14:56.717492    2157 docker.go:339] Building image: /var/lib/minikube/build/build.1903339706
I0906 12:14:56.717537    2157 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-779000 /var/lib/minikube/build/build.1903339706
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I0906 12:14:58.494827    2157 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-779000 /var/lib/minikube/build/build.1903339706: (1.777290333s)
I0906 12:14:58.495113    2157 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1903339706
I0906 12:14:58.498338    2157 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1903339706.tar
I0906 12:14:58.501104    2157 build_images.go:207] Built localhost/my-image:functional-779000 from /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.1903339706.tar
I0906 12:14:58.501118    2157 build_images.go:123] succeeded building to: functional-779000
I0906 12:14:58.501120    2157 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 image ls
2023/09/06 12:14:59 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.02s)
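The three `Step` lines in the build log above imply a `testdata/build` Dockerfile along these lines (a reconstruction from the log, not the actual file contents):

```dockerfile
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
```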

TestFunctional/parallel/ImageCommands/Setup (1.51s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.463186875s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-779000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.51s)

TestFunctional/parallel/DockerEnv/bash (0.4s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-779000 docker-env) && out/minikube-darwin-arm64 status -p functional-779000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-779000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.40s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.06s)

TestFunctional/parallel/ServiceCmd/DeployApp (13.13s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-779000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-779000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-bfvfs" [d41b6ca7-a298-4090-8212-574cc9b7e1c0] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-bfvfs" [d41b6ca7-a298-4090-8212-574cc9b7e1c0] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 13.01940525s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (13.13s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 image load --daemon gcr.io/google-containers/addon-resizer:functional-779000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-arm64 -p functional-779000 image load --daemon gcr.io/google-containers/addon-resizer:functional-779000 --alsologtostderr: (2.192491958s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.27s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 image load --daemon gcr.io/google-containers/addon-resizer:functional-779000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-arm64 -p functional-779000 image load --daemon gcr.io/google-containers/addon-resizer:functional-779000 --alsologtostderr: (1.51741325s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.59s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.72s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.59904825s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-779000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 image load --daemon gcr.io/google-containers/addon-resizer:functional-779000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-arm64 -p functional-779000 image load --daemon gcr.io/google-containers/addon-resizer:functional-779000 --alsologtostderr: (1.9978605s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.72s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 image save gcr.io/google-containers/addon-resizer:functional-779000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.52s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 image rm gcr.io/google-containers/addon-resizer:functional-779000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.16s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.61s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-779000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 image save --daemon gcr.io/google-containers/addon-resizer:functional-779000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-779000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.61s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-779000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-779000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [32af1c98-92d3-49a7-8123-8c31fc6dfe59] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [32af1c98-92d3-49a7-8123-8c31fc6dfe59] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.0061095s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.12s)

TestFunctional/parallel/ServiceCmd/List (0.11s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.11s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 service list -o json
functional_test.go:1493: Took "95.778833ms" to run "out/minikube-darwin-arm64 -p functional-779000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.10s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.105.4:31191
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.11s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.11s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.105.4:31191
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-779000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.105.159.86 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-779000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.19s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1314: Took "119.600417ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1328: Took "33.678208ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.15s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1365: Took "124.096291ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1378: Took "34.352208ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.16s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-779000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2835954442/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1694027678816282000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2835954442/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1694027678816282000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2835954442/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1694027678816282000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2835954442/001/test-1694027678816282000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-779000 ssh "findmnt -T /mount-9p | grep 9p": exit status 80 (58.609125ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: connect: dial unix /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/functional-779000/monitor: connect: connection refused
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_mount_118bbc3371111e9a3200d56130cc04b9c0f8936a_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep  6 19:14 created-by-test
-rw-r--r-- 1 docker docker 24 Sep  6 19:14 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep  6 19:14 test-1694027678816282000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 ssh cat /mount-9p/test-1694027678816282000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-779000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [97723327-f70c-4e49-86ef-5c912e219056] Pending
helpers_test.go:344: "busybox-mount" [97723327-f70c-4e49-86ef-5c912e219056] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [97723327-f70c-4e49-86ef-5c912e219056] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [97723327-f70c-4e49-86ef-5c912e219056] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.008341541s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-779000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-779000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2835954442/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.12s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-779000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port2176480214/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-779000 ssh "findmnt -T /mount-9p | grep 9p": exit status 80 (58.220167ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: connect: dial unix /Users/jenkins/minikube-integration/17116-1006/.minikube/machines/functional-779000/monitor: connect: connection refused
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_mount_bf34daac867adf2d983819dd8dbd54b907f3b2f8_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-779000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port2176480214/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-779000 ssh "sudo umount -f /mount-9p": exit status 1 (65.347708ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-779000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-779000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port2176480214/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.22s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-779000
--- PASS: TestFunctional/delete_addon-resizer_images (0.12s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-779000
--- PASS: TestFunctional/delete_my-image_image (0.04s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-779000
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-147000 --driver=qemu2 
image_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -p image-147000 --driver=qemu2 : (30.533391084s)
--- PASS: TestImageBuild/serial/Setup (30.53s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-147000
image_test.go:78: (dbg) Done: out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-147000: (1.026127333s)
--- PASS: TestImageBuild/serial/NormalBuild (1.03s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-147000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.13s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-147000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.11s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-arm64 start -p ingress-addon-legacy-192000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=qemu2 
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-darwin-arm64 start -p ingress-addon-legacy-192000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=qemu2 : (1m3.120173791s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (63.12s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-192000 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-192000 addons enable ingress --alsologtostderr -v=5: (14.820198417s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (14.82s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-192000 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.22s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-900000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 start -p json-output-900000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : (44.5824035s)
--- PASS: TestJSONOutput/start/Command (44.58s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-900000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.28s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-900000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.22s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-900000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-900000 --output=json --user=testUser: (12.076042375s)
--- PASS: TestJSONOutput/stop/Command (12.08s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.32s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-464000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-464000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (89.071ms)
-- stdout --
	{"specversion":"1.0","id":"96b46dae-4ebd-401b-995c-6283c3382a23","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-464000] minikube v1.31.2 on Darwin 13.5.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a705dd85-5650-479d-9d57-39b84865cc96","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17116"}}
	{"specversion":"1.0","id":"e001173a-6cdf-4af4-8b86-287d0f37a531","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig"}}
	{"specversion":"1.0","id":"a1095493-6ff0-430b-82c5-76ff52ddddee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"d186f40c-f57e-417d-b4b9-bc067e68c8ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"14ec06f5-ee05-4390-b754-c0b5cc523d2e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube"}}
	{"specversion":"1.0","id":"acb77c59-dfa7-4f50-80ee-198af9bbfc38","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c9f5121b-0e23-4019-bdc8-50bf20f0f899","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-464000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-464000
--- PASS: TestErrorJSONOutput (0.32s)
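Each line in the `-- stdout --` dump above is a CloudEvents-style JSON object, so the stream can be consumed programmatically. Below is a minimal Go sketch of decoding one such event; the struct covers only the fields visible in the log lines above (`type`, `data`), not minikube's full event schema, and the sample line is the error event from this run, trimmed to the fields used:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// cloudEvent models only the fields visible in the --output=json lines above;
// the real events carry more (specversion, id, source, datacontenttype, ...).
type cloudEvent struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

// summarize turns one JSON event line into a short human-readable string.
func summarize(line string) (string, error) {
	var ev cloudEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		return "", err
	}
	if ev.Type == "io.k8s.sigs.minikube.error" {
		return fmt.Sprintf("%s (exit %s): %s",
			ev.Data["name"], ev.Data["exitcode"], ev.Data["message"]), nil
	}
	return ev.Data["message"], nil
}

func main() {
	// The error event from the log above, abbreviated to the fields we use.
	line := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS"}}`
	s, err := summarize(line)
	if err != nil {
		panic(err)
	}
	fmt.Println(s) // DRV_UNSUPPORTED_OS (exit 56): The driver 'fail' is not supported on darwin/arm64
}
```

This is how the test harness itself validates the output: every line must decode as a single JSON event, which is why a non-JSON stray line would fail the test even when the command exits as expected.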

TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestMinikubeProfile (61.29s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-295000 --driver=qemu2 
E0906 12:19:00.114750    1421 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/functional-779000/client.crt: no such file or directory
E0906 12:19:00.123172    1421 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/functional-779000/client.crt: no such file or directory
E0906 12:19:00.135342    1421 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/functional-779000/client.crt: no such file or directory
E0906 12:19:00.157465    1421 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/functional-779000/client.crt: no such file or directory
E0906 12:19:00.199594    1421 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/functional-779000/client.crt: no such file or directory
E0906 12:19:00.281681    1421 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/functional-779000/client.crt: no such file or directory
E0906 12:19:00.443741    1421 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/functional-779000/client.crt: no such file or directory
E0906 12:19:00.765781    1421 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/functional-779000/client.crt: no such file or directory
E0906 12:19:01.407837    1421 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/functional-779000/client.crt: no such file or directory
E0906 12:19:02.688029    1421 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/functional-779000/client.crt: no such file or directory
E0906 12:19:05.250129    1421 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/functional-779000/client.crt: no such file or directory
E0906 12:19:10.372228    1421 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/functional-779000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-arm64 start -p first-295000 --driver=qemu2 : (29.580983s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p second-296000 --driver=qemu2 
E0906 12:19:20.614434    1421 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/functional-779000/client.crt: no such file or directory
E0906 12:19:41.096476    1421 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17116-1006/.minikube/profiles/functional-779000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-arm64 start -p second-296000 --driver=qemu2 : (30.936543958s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 profile first-295000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 profile second-296000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-296000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-296000
helpers_test.go:175: Cleaning up "first-295000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-295000
--- PASS: TestMinikubeProfile (61.29s)
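The `profile list -ojson` calls above emit a JSON document that the test decodes to verify both profiles exist. A hedged Go sketch of extracting profile names from such a document; the `valid`/`invalid` arrays and the `Name` field are assumptions based on minikube's typical output shape, not a documented contract:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// profileList mirrors the assumed shape of `minikube profile list -o json`:
// top-level "valid"/"invalid" arrays of profiles, each with a "Name".
type profileList struct {
	Valid   []struct{ Name string } `json:"valid"`
	Invalid []struct{ Name string } `json:"invalid"`
}

// names returns the names of all valid profiles in the document.
func names(raw []byte) ([]string, error) {
	var pl profileList
	if err := json.Unmarshal(raw, &pl); err != nil {
		return nil, err
	}
	var out []string
	for _, p := range pl.Valid {
		out = append(out, p.Name)
	}
	return out, nil
}

func main() {
	// Sample document shaped like the output for the two profiles above.
	raw := []byte(`{"invalid":[],"valid":[{"Name":"first-295000"},{"Name":"second-296000"}]}`)
	ns, err := names(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(ns) // [first-295000 second-296000]
}
```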

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-305000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-305000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (93.629083ms)
-- stdout --
	* [NoKubernetes-305000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17116
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17116-1006/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17116-1006/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-305000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-305000 "sudo systemctl is-active --quiet service kubelet": exit status 89 (41.349625ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p NoKubernetes-305000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (0.14s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.14s)

TestNoKubernetes/serial/Stop (0.06s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-305000
--- PASS: TestNoKubernetes/serial/Stop (0.06s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-305000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-305000 "sudo systemctl is-active --quiet service kubelet": exit status 89 (43.353875ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p NoKubernetes-305000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStartStop/group/old-k8s-version/serial/Stop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-694000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (0.06s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-694000 -n old-k8s-version-694000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-694000 -n old-k8s-version-694000: exit status 7 (29.929542ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-694000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/no-preload/serial/Stop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-516000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/no-preload/serial/Stop (0.06s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-516000 -n no-preload-516000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-516000 -n no-preload-516000: exit status 7 (27.995917ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-516000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/embed-certs/serial/Stop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-293000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/embed-certs/serial/Stop (0.06s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-293000 -n embed-certs-293000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-293000 -n embed-certs-293000: exit status 7 (28.68625ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-293000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-649000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-649000 -n default-k8s-diff-port-649000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-649000 -n default-k8s-diff-port-649000: exit status 7 (27.739625ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-649000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-401000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-401000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/newest-cni/serial/Stop (0.06s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-401000 -n newest-cni-401000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-401000 -n newest-cni-401000: exit status 7 (28.8965ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-401000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (21/244)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.28.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.1/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.1/cached-images (0.00s)

TestDownloadOnly/v1.28.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.1/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.1/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:210: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/VerifyCleanup (10.65s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-779000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2094269783/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-779000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2094269783/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-779000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2094269783/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-779000 ssh "findmnt -T" /mount1: exit status 1 (75.217666ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-779000 ssh "findmnt -T" /mount2: exit status 1 (64.797458ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-779000 ssh "findmnt -T" /mount2: exit status 1 (63.903125ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-779000 ssh "findmnt -T" /mount2: exit status 1 (64.962917ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-779000 ssh "findmnt -T" /mount2: exit status 1 (68.161125ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-779000 ssh "findmnt -T" /mount2: exit status 1 (64.137208ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-779000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-779000 ssh "findmnt -T" /mount2: exit status 1 (62.102333ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-779000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2094269783/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-779000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2094269783/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-779000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2094269783/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (10.65s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:296: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.36s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-330000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-330000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-330000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-330000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-330000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-330000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-330000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-330000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-330000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-330000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-330000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-330000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-330000"

>>> host: /etc/hosts:
* Profile "cilium-330000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-330000"

>>> host: /etc/resolv.conf:
* Profile "cilium-330000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-330000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-330000

>>> host: crictl pods:
* Profile "cilium-330000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-330000"

>>> host: crictl containers:
* Profile "cilium-330000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-330000"

>>> k8s: describe netcat deployment:
error: context "cilium-330000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-330000" does not exist

>>> k8s: netcat logs:
error: context "cilium-330000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-330000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-330000" does not exist

>>> k8s: coredns logs:
error: context "cilium-330000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-330000" does not exist

>>> k8s: api server logs:
error: context "cilium-330000" does not exist

>>> host: /etc/cni:
* Profile "cilium-330000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-330000"

>>> host: ip a s:
* Profile "cilium-330000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-330000"

>>> host: ip r s:
* Profile "cilium-330000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-330000"

>>> host: iptables-save:
* Profile "cilium-330000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-330000"

>>> host: iptables table nat:
* Profile "cilium-330000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-330000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-330000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-330000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-330000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-330000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-330000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-330000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-330000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-330000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-330000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-330000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-330000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-330000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-330000"

>>> host: kubelet daemon config:
* Profile "cilium-330000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-330000"

>>> k8s: kubelet logs:
* Profile "cilium-330000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-330000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-330000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-330000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-330000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-330000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-330000

>>> host: docker daemon status:
* Profile "cilium-330000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-330000"

>>> host: docker daemon config:
* Profile "cilium-330000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-330000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-330000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-330000"

>>> host: docker system info:
* Profile "cilium-330000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-330000"

>>> host: cri-docker daemon status:
* Profile "cilium-330000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-330000"

>>> host: cri-docker daemon config:
* Profile "cilium-330000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-330000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-330000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-330000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-330000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-330000"

>>> host: cri-dockerd version:
* Profile "cilium-330000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-330000"

>>> host: containerd daemon status:
* Profile "cilium-330000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-330000"

>>> host: containerd daemon config:
* Profile "cilium-330000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-330000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-330000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-330000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-330000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-330000"

>>> host: containerd config dump:
* Profile "cilium-330000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-330000"

>>> host: crio daemon status:
* Profile "cilium-330000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-330000"

>>> host: crio daemon config:
* Profile "cilium-330000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-330000"

>>> host: /etc/crio:
* Profile "cilium-330000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-330000"

>>> host: crio config:
* Profile "cilium-330000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-330000"

----------------------- debugLogs end: cilium-330000 [took: 2.120727125s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-330000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-330000
--- SKIP: TestNetworkPlugins/group/cilium (2.36s)

TestStartStop/group/disable-driver-mounts (0.24s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-931000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-931000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.24s)