Test Report: QEMU_macOS 17011

4d909ae33ff265fc050ea07aeaa703b9386ea7a9:2023-08-09:30510

Failed tests (87/250)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 12.09
7 TestDownloadOnly/v1.16.0/kubectl 0
27 TestOffline 9.88
29 TestAddons/Setup 44.55
30 TestCertOptions 10.08
31 TestCertExpiration 195.26
32 TestDockerFlags 10.08
33 TestForceSystemdFlag 11.29
34 TestForceSystemdEnv 10.13
79 TestFunctional/parallel/ServiceCmdConnect 34.6
121 TestFunctional/parallel/ImageCommands/ImageListShort 0.04
146 TestImageBuild/serial/BuildWithBuildArg 1.07
155 TestIngressAddonLegacy/serial/ValidateIngressAddons 55.04
190 TestMountStart/serial/StartWithMountFirst 10.25
193 TestMultiNode/serial/FreshStart2Nodes 10.52
194 TestMultiNode/serial/DeployApp2Nodes 105.85
195 TestMultiNode/serial/PingHostFrom2Pods 0.08
196 TestMultiNode/serial/AddNode 0.07
197 TestMultiNode/serial/ProfileList 0.16
198 TestMultiNode/serial/CopyFile 0.06
199 TestMultiNode/serial/StopNode 0.13
200 TestMultiNode/serial/StartAfterStop 0.1
201 TestMultiNode/serial/RestartKeepsNodes 5.37
202 TestMultiNode/serial/DeleteNode 0.09
203 TestMultiNode/serial/StopMultiNode 0.14
204 TestMultiNode/serial/RestartMultiNode 5.25
205 TestMultiNode/serial/ValidateNameConflict 20.14
209 TestPreload 9.9
211 TestScheduledStopUnix 9.87
212 TestSkaffold 11.91
215 TestRunningBinaryUpgrade 128.13
217 TestKubernetesUpgrade 15.28
230 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.76
231 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.34
232 TestStoppedBinaryUpgrade/Setup 174.28
234 TestPause/serial/Start 9.86
244 TestNoKubernetes/serial/StartWithK8s 9.84
245 TestNoKubernetes/serial/StartWithStopK8s 5.36
246 TestNoKubernetes/serial/Start 5.3
250 TestNoKubernetes/serial/StartNoArgs 5.3
252 TestNetworkPlugins/group/auto/Start 9.9
253 TestNetworkPlugins/group/kindnet/Start 9.87
254 TestNetworkPlugins/group/calico/Start 9.88
255 TestNetworkPlugins/group/custom-flannel/Start 9.8
256 TestNetworkPlugins/group/false/Start 9.8
257 TestNetworkPlugins/group/enable-default-cni/Start 9.75
258 TestNetworkPlugins/group/flannel/Start 9.85
259 TestNetworkPlugins/group/bridge/Start 9.99
260 TestNetworkPlugins/group/kubenet/Start 9.87
261 TestStoppedBinaryUpgrade/Upgrade 2.83
262 TestStoppedBinaryUpgrade/MinikubeLogs 0.12
264 TestStartStop/group/old-k8s-version/serial/FirstStart 11.11
266 TestStartStop/group/no-preload/serial/FirstStart 10.04
267 TestStartStop/group/old-k8s-version/serial/DeployApp 0.1
268 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
271 TestStartStop/group/old-k8s-version/serial/SecondStart 7.1
272 TestStartStop/group/no-preload/serial/DeployApp 0.08
273 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.1
276 TestStartStop/group/no-preload/serial/SecondStart 5.19
277 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
278 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.05
279 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.06
280 TestStartStop/group/old-k8s-version/serial/Pause 0.1
281 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
282 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
283 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
285 TestStartStop/group/embed-certs/serial/FirstStart 9.86
286 TestStartStop/group/no-preload/serial/Pause 0.12
288 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 11.4
289 TestStartStop/group/embed-certs/serial/DeployApp 0.1
290 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
293 TestStartStop/group/embed-certs/serial/SecondStart 6.99
294 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
295 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
298 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.2
299 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
300 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.05
301 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.06
302 TestStartStop/group/embed-certs/serial/Pause 0.09
303 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
304 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
305 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
307 TestStartStop/group/newest-cni/serial/FirstStart 10.17
308 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.11
313 TestStartStop/group/newest-cni/serial/SecondStart 5.25
316 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
317 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.16.0/json-events (12.09s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-498000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-498000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=qemu2 : exit status 40 (12.085250583s)

-- stdout --
	{"specversion":"1.0","id":"96ccc0bf-ecef-4d2d-9f86-18dbcc1bda8d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-498000] minikube v1.31.1 on Darwin 13.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6badeeed-3a1d-469e-9af2-28b1a9be70f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17011"}}
	{"specversion":"1.0","id":"f8f4267f-49fc-4d64-911f-473432636d59","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig"}}
	{"specversion":"1.0","id":"fe8b5760-273c-4152-8476-f7f2356c4caa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"5344c5c9-63cd-4e2b-950d-c862d99e61aa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"65deccaf-37e8-4981-8fee-f4300c8856f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube"}}
	{"specversion":"1.0","id":"b4070b0a-b88e-4650-90eb-cd0dae7a9879","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"414c9798-de35-4ee9-933b-74ea5003fe5d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"18a2f7f5-2bab-4f20-90b8-b66639bccd50","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"cbbf25f6-b804-4d52-9460-b8bb0c923c5c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"8f5f5144-3b5e-42ef-b6e1-d1e53133dc20","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node download-only-498000 in cluster download-only-498000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"9b05ae53-49b8-40b9-9b57-79c2049b8b0e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.16.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"16a5cbbf-fd04-49d7-932e-c00c55746439","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/17011-995/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x107c04710 0x107c04710 0x107c04710 0x107c04710 0x107c04710 0x107c04710 0x107c04710] Decompressors:map[bz2:0x14000701a80 gz:0x14000701a88 tar:0x14000701a30 tar.bz2:0x14000701a40 tar.gz:0x14000701a50 tar.xz:0x14000701a60 tar.zst:0x14000701a70 tbz2:0x14000701a40 tgz:0x1400070
1a50 txz:0x14000701a60 tzst:0x14000701a70 xz:0x14000701a90 zip:0x14000701aa0 zst:0x14000701a98] Getters:map[file:0x14000e9c5f0 http:0x140005fc190 https:0x140005fc1e0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"40e522bc-856b-4f02-ad8f-e418ae2141ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0809 11:08:50.930648    1413 out.go:296] Setting OutFile to fd 1 ...
	I0809 11:08:50.930770    1413 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:08:50.930773    1413 out.go:309] Setting ErrFile to fd 2...
	I0809 11:08:50.930776    1413 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:08:50.930881    1413 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17011-995/.minikube/bin
	W0809 11:08:50.930935    1413 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17011-995/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17011-995/.minikube/config/config.json: no such file or directory
	I0809 11:08:50.932056    1413 out.go:303] Setting JSON to true
	I0809 11:08:50.948374    1413 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":504,"bootTime":1691604026,"procs":409,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0809 11:08:50.948428    1413 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0809 11:08:50.957136    1413 out.go:97] [download-only-498000] minikube v1.31.1 on Darwin 13.5 (arm64)
	I0809 11:08:50.960943    1413 out.go:169] MINIKUBE_LOCATION=17011
	I0809 11:08:50.957298    1413 notify.go:220] Checking for updates...
	W0809 11:08:50.957308    1413 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball: no such file or directory
	I0809 11:08:50.970973    1413 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	I0809 11:08:50.974032    1413 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0809 11:08:50.977033    1413 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0809 11:08:50.979945    1413 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	W0809 11:08:50.986027    1413 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0809 11:08:50.986249    1413 driver.go:373] Setting default libvirt URI to qemu:///system
	I0809 11:08:50.990905    1413 out.go:97] Using the qemu2 driver based on user configuration
	I0809 11:08:50.990912    1413 start.go:298] selected driver: qemu2
	I0809 11:08:50.990914    1413 start.go:901] validating driver "qemu2" against <nil>
	I0809 11:08:50.990974    1413 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0809 11:08:50.994999    1413 out.go:169] Automatically selected the socket_vmnet network
	I0809 11:08:51.001445    1413 start_flags.go:382] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0809 11:08:51.001523    1413 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0809 11:08:51.001574    1413 cni.go:84] Creating CNI manager for ""
	I0809 11:08:51.001592    1413 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0809 11:08:51.001596    1413 start_flags.go:319] config:
	{Name:download-only-498000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-498000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRun
time:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0809 11:08:51.007207    1413 iso.go:125] acquiring lock: {Name:mkcf9da2bbc06f7ffafe691590a499fa3fc28d1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 11:08:51.011018    1413 out.go:97] Downloading VM boot image ...
	I0809 11:08:51.011035    1413 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/17011-995/.minikube/cache/iso/arm64/minikube-v1.31.0-1690838458-16971-arm64.iso
	I0809 11:08:56.279215    1413 out.go:97] Starting control plane node download-only-498000 in cluster download-only-498000
	I0809 11:08:56.279249    1413 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0809 11:08:56.338650    1413 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0809 11:08:56.338669    1413 cache.go:57] Caching tarball of preloaded images
	I0809 11:08:56.338831    1413 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0809 11:08:56.342847    1413 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0809 11:08:56.342853    1413 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0809 11:08:56.422163    1413 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0809 11:09:01.832206    1413 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0809 11:09:01.832347    1413 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0809 11:09:02.471345    1413 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0809 11:09:02.471536    1413 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/download-only-498000/config.json ...
	I0809 11:09:02.471555    1413 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/download-only-498000/config.json: {Name:mk2bde276129fa60a0acedd1cd1f332b26f05753 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 11:09:02.471783    1413 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0809 11:09:02.471953    1413 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 -> /Users/jenkins/minikube-integration/17011-995/.minikube/cache/darwin/arm64/v1.16.0/kubectl
	I0809 11:09:02.948281    1413 out.go:169] 
	W0809 11:09:02.952455    1413 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/17011-995/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x107c04710 0x107c04710 0x107c04710 0x107c04710 0x107c04710 0x107c04710 0x107c04710] Decompressors:map[bz2:0x14000701a80 gz:0x14000701a88 tar:0x14000701a30 tar.bz2:0x14000701a40 tar.gz:0x14000701a50 tar.xz:0x14000701a60 tar.zst:0x14000701a70 tbz2:0x14000701a40 tgz:0x14000701a50 txz:0x14000701a60 tzst:0x14000701a70 xz:0x14000701a90 zip:0x14000701aa0 zst:0x14000701a98] Getters:map[file:0x14000e9c5f0 http:0x140005fc190 https:0x140005fc1e0] Dir:false ProgressListener:
<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0809 11:09:02.952482    1413 out_reason.go:110] 
	W0809 11:09:02.959488    1413 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0809 11:09:02.963359    1413 out.go:169] 

** /stderr **
aaa_download_only_test.go:71: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-498000" "--force" "--alsologtostderr" "--kubernetes-version=v1.16.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.16.0/json-events (12.09s)
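The `io.k8s.sigs.minikube.error` event in the stdout above carries the exit code (40) and the root-cause message. When triaging many `--download-only -o=json` failures, the CloudEvents stream can be filtered with a short script. A minimal sketch; the sample event is abbreviated from the log above, not the full payload:

```python
import json


def extract_errors(lines):
    """Yield (exitcode, message) pairs from minikube -o=json CloudEvents lines."""
    for line in lines:
        line = line.strip()
        if not line.startswith("{"):
            continue  # skip non-JSON noise (progress output, stderr echoes)
        try:
            event = json.loads(line)
        except json.JSONDecodeError:
            continue  # tolerate wrapped or truncated log lines
        if event.get("type") == "io.k8s.sigs.minikube.error":
            data = event.get("data", {})
            yield data.get("exitcode", ""), data.get("message", "")


# Abbreviated sample modeled on the failing event in the log above.
sample = [
    '{"specversion":"1.0","type":"io.k8s.sigs.minikube.error",'
    '"data":{"exitcode":"40","message":"Failed to cache kubectl: download failed"}}',
]
for code, msg in extract_errors(sample):
    print(code, msg)
```

Only events whose `type` is `io.k8s.sigs.minikube.error` are reported; step and info events pass through silently, which keeps the output to the one line that explains the non-zero exit.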

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:160: expected the file for binary exist at "/Users/jenkins/minikube-integration/17011-995/.minikube/cache/darwin/arm64/v1.16.0/kubectl" but got error stat /Users/jenkins/minikube-integration/17011-995/.minikube/cache/darwin/arm64/v1.16.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.16.0/kubectl (0.00s)
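This failure is downstream of the previous one: the kubectl binary never reached the cache because the download 404'd. The likely root cause is that no darwin/arm64 kubectl release binary exists for v1.16.0 (official darwin/arm64 builds only began around v1.20; worth verifying against dl.k8s.io). A sketch of the URL layout the failing request used, mirroring the URL in the log:

```python
def kubectl_url(version, goos="darwin", goarch="arm64"):
    """Build the dl.k8s.io release URL that kubectl is fetched from."""
    return f"https://dl.k8s.io/release/{version}/bin/{goos}/{goarch}/kubectl"


# The failing request from the log: no darwin/arm64 kubectl was published
# for v1.16.0, so both this binary and its .sha1 checksum return 404.
print(kubectl_url("v1.16.0"))
```

Pinning the test matrix so that pre-v1.20 Kubernetes versions are only exercised on architectures that have published binaries would avoid this class of failure on the arm64 agents.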

TestOffline (9.88s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-265000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-265000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.744041417s)

-- stdout --
	* [offline-docker-265000] minikube v1.31.1 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17011
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node offline-docker-265000 in cluster offline-docker-265000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-265000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0809 11:22:47.427669    2996 out.go:296] Setting OutFile to fd 1 ...
	I0809 11:22:47.427815    2996 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:22:47.427818    2996 out.go:309] Setting ErrFile to fd 2...
	I0809 11:22:47.427820    2996 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:22:47.427941    2996 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17011-995/.minikube/bin
	I0809 11:22:47.429074    2996 out.go:303] Setting JSON to false
	I0809 11:22:47.445805    2996 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1341,"bootTime":1691604026,"procs":399,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0809 11:22:47.445890    2996 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0809 11:22:47.450673    2996 out.go:177] * [offline-docker-265000] minikube v1.31.1 on Darwin 13.5 (arm64)
	I0809 11:22:47.458450    2996 out.go:177]   - MINIKUBE_LOCATION=17011
	I0809 11:22:47.462521    2996 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	I0809 11:22:47.458481    2996 notify.go:220] Checking for updates...
	I0809 11:22:47.468564    2996 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0809 11:22:47.471640    2996 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0809 11:22:47.474572    2996 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	I0809 11:22:47.477488    2996 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0809 11:22:47.480832    2996 config.go:182] Loaded profile config "multinode-305000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0809 11:22:47.480877    2996 driver.go:373] Setting default libvirt URI to qemu:///system
	I0809 11:22:47.484527    2996 out.go:177] * Using the qemu2 driver based on user configuration
	I0809 11:22:47.491596    2996 start.go:298] selected driver: qemu2
	I0809 11:22:47.491602    2996 start.go:901] validating driver "qemu2" against <nil>
	I0809 11:22:47.491610    2996 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0809 11:22:47.493407    2996 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0809 11:22:47.496542    2996 out.go:177] * Automatically selected the socket_vmnet network
	I0809 11:22:47.500319    2996 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0809 11:22:47.500501    2996 cni.go:84] Creating CNI manager for ""
	I0809 11:22:47.500524    2996 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0809 11:22:47.500536    2996 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0809 11:22:47.500547    2996 start_flags.go:319] config:
	{Name:offline-docker-265000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:offline-docker-265000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthS
ock: SSHAgentPID:0}
	I0809 11:22:47.505045    2996 iso.go:125] acquiring lock: {Name:mkcf9da2bbc06f7ffafe691590a499fa3fc28d1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 11:22:47.513381    2996 out.go:177] * Starting control plane node offline-docker-265000 in cluster offline-docker-265000
	I0809 11:22:47.517490    2996 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0809 11:22:47.517522    2996 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4
	I0809 11:22:47.517539    2996 cache.go:57] Caching tarball of preloaded images
	I0809 11:22:47.517603    2996 preload.go:174] Found /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0809 11:22:47.517607    2996 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on docker
	I0809 11:22:47.517673    2996 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/offline-docker-265000/config.json ...
	I0809 11:22:47.517684    2996 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/offline-docker-265000/config.json: {Name:mkcd42b4d95b9f17fd8a2c9ac72bd3c073d944d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 11:22:47.517884    2996 start.go:365] acquiring machines lock for offline-docker-265000: {Name:mk8e7245a2916cb0ab3d957fdcdeb632388214d9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0809 11:22:47.517916    2996 start.go:369] acquired machines lock for "offline-docker-265000" in 22.25µs
	I0809 11:22:47.517927    2996 start.go:93] Provisioning new machine with config: &{Name:offline-docker-265000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:offline-docker-265000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0809 11:22:47.517966    2996 start.go:125] createHost starting for "" (driver="qemu2")
	I0809 11:22:47.521523    2996 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0809 11:22:47.535707    2996 start.go:159] libmachine.API.Create for "offline-docker-265000" (driver="qemu2")
	I0809 11:22:47.535733    2996 client.go:168] LocalClient.Create starting
	I0809 11:22:47.535791    2996 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem
	I0809 11:22:47.535818    2996 main.go:141] libmachine: Decoding PEM data...
	I0809 11:22:47.535832    2996 main.go:141] libmachine: Parsing certificate...
	I0809 11:22:47.535875    2996 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/cert.pem
	I0809 11:22:47.535893    2996 main.go:141] libmachine: Decoding PEM data...
	I0809 11:22:47.535903    2996 main.go:141] libmachine: Parsing certificate...
	I0809 11:22:47.536222    2996 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17011-995/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17011-995/.minikube/cache/iso/arm64/minikube-v1.31.0-1690838458-16971-arm64.iso...
	I0809 11:22:47.657434    2996 main.go:141] libmachine: Creating SSH key...
	I0809 11:22:47.739405    2996 main.go:141] libmachine: Creating Disk image...
	I0809 11:22:47.739414    2996 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0809 11:22:47.739577    2996 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17011-995/.minikube/machines/offline-docker-265000/disk.qcow2.raw /Users/jenkins/minikube-integration/17011-995/.minikube/machines/offline-docker-265000/disk.qcow2
	I0809 11:22:47.748414    2996 main.go:141] libmachine: STDOUT: 
	I0809 11:22:47.748428    2996 main.go:141] libmachine: STDERR: 
	I0809 11:22:47.748498    2996 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/offline-docker-265000/disk.qcow2 +20000M
	I0809 11:22:47.756705    2996 main.go:141] libmachine: STDOUT: Image resized.
	
	I0809 11:22:47.756730    2996 main.go:141] libmachine: STDERR: 
	I0809 11:22:47.756750    2996 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17011-995/.minikube/machines/offline-docker-265000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17011-995/.minikube/machines/offline-docker-265000/disk.qcow2
	I0809 11:22:47.756757    2996 main.go:141] libmachine: Starting QEMU VM...
	I0809 11:22:47.756797    2996 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17011-995/.minikube/machines/offline-docker-265000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/offline-docker-265000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17011-995/.minikube/machines/offline-docker-265000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:25:30:f8:3b:ac -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/offline-docker-265000/disk.qcow2
	I0809 11:22:47.758542    2996 main.go:141] libmachine: STDOUT: 
	I0809 11:22:47.758553    2996 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0809 11:22:47.758570    2996 client.go:171] LocalClient.Create took 222.598625ms
	I0809 11:22:49.762667    2996 start.go:128] duration metric: createHost completed in 2.242445792s
	I0809 11:22:49.762721    2996 start.go:83] releasing machines lock for "offline-docker-265000", held for 2.242567792s
	W0809 11:22:49.762745    2996 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:22:49.772741    2996 out.go:177] * Deleting "offline-docker-265000" in qemu2 ...
	W0809 11:22:49.783956    2996 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:22:49.783965    2996 start.go:687] Will try again in 5 seconds ...
	I0809 11:22:54.790105    2996 start.go:365] acquiring machines lock for offline-docker-265000: {Name:mk8e7245a2916cb0ab3d957fdcdeb632388214d9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0809 11:22:54.790467    2996 start.go:369] acquired machines lock for "offline-docker-265000" in 282.875µs
	I0809 11:22:54.790577    2996 start.go:93] Provisioning new machine with config: &{Name:offline-docker-265000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:offline-docker-265000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0809 11:22:54.790884    2996 start.go:125] createHost starting for "" (driver="qemu2")
	I0809 11:22:54.800435    2996 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0809 11:22:54.847443    2996 start.go:159] libmachine.API.Create for "offline-docker-265000" (driver="qemu2")
	I0809 11:22:54.847475    2996 client.go:168] LocalClient.Create starting
	I0809 11:22:54.847594    2996 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem
	I0809 11:22:54.847656    2996 main.go:141] libmachine: Decoding PEM data...
	I0809 11:22:54.847677    2996 main.go:141] libmachine: Parsing certificate...
	I0809 11:22:54.847757    2996 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/cert.pem
	I0809 11:22:54.847793    2996 main.go:141] libmachine: Decoding PEM data...
	I0809 11:22:54.847807    2996 main.go:141] libmachine: Parsing certificate...
	I0809 11:22:54.848321    2996 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17011-995/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17011-995/.minikube/cache/iso/arm64/minikube-v1.31.0-1690838458-16971-arm64.iso...
	I0809 11:22:54.971858    2996 main.go:141] libmachine: Creating SSH key...
	I0809 11:22:55.095808    2996 main.go:141] libmachine: Creating Disk image...
	I0809 11:22:55.095814    2996 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0809 11:22:55.095965    2996 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17011-995/.minikube/machines/offline-docker-265000/disk.qcow2.raw /Users/jenkins/minikube-integration/17011-995/.minikube/machines/offline-docker-265000/disk.qcow2
	I0809 11:22:55.104439    2996 main.go:141] libmachine: STDOUT: 
	I0809 11:22:55.104453    2996 main.go:141] libmachine: STDERR: 
	I0809 11:22:55.104524    2996 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/offline-docker-265000/disk.qcow2 +20000M
	I0809 11:22:55.111698    2996 main.go:141] libmachine: STDOUT: Image resized.
	
	I0809 11:22:55.111710    2996 main.go:141] libmachine: STDERR: 
	I0809 11:22:55.111721    2996 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17011-995/.minikube/machines/offline-docker-265000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17011-995/.minikube/machines/offline-docker-265000/disk.qcow2
	I0809 11:22:55.111740    2996 main.go:141] libmachine: Starting QEMU VM...
	I0809 11:22:55.111782    2996 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17011-995/.minikube/machines/offline-docker-265000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/offline-docker-265000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17011-995/.minikube/machines/offline-docker-265000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:4c:12:bc:b9:11 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/offline-docker-265000/disk.qcow2
	I0809 11:22:55.113311    2996 main.go:141] libmachine: STDOUT: 
	I0809 11:22:55.113323    2996 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0809 11:22:55.113336    2996 client.go:171] LocalClient.Create took 265.682625ms
	I0809 11:22:57.116664    2996 start.go:128] duration metric: createHost completed in 2.324335875s
	I0809 11:22:57.116709    2996 start.go:83] releasing machines lock for "offline-docker-265000", held for 2.324795333s
	W0809 11:22:57.116911    2996 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-265000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-265000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:22:57.125152    2996 out.go:177] 
	W0809 11:22:57.129170    2996 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0809 11:22:57.129185    2996 out.go:239] * 
	* 
	W0809 11:22:57.130915    2996 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0809 11:22:57.141185    2996 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-265000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:522: *** TestOffline FAILED at 2023-08-09 11:22:57.153827 -0700 PDT m=+846.301611585
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-265000 -n offline-docker-265000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-265000 -n offline-docker-265000: exit status 7 (40.393459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-265000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-265000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-265000
--- FAIL: TestOffline (9.88s)
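
Every host-provisioning failure in this test shares one stderr signature, `Failed to connect to "/var/run/socket_vmnet": Connection refused`, so the qemu2 VM was never created on either attempt. A minimal triage sketch for separating this environment outage from genuine regressions; the helper name and the log filename are assumptions, not part of this report:

```shell
#!/bin/sh
# Count occurrences of the shared socket_vmnet failure signature in a saved
# suite log. A nonzero count suggests an environment outage, not a test bug.
count_vmnet_failures() {
    # grep -F: fixed-string match (the signature contains quote characters);
    # grep -c: print the number of matching lines in the file named by "$1".
    grep -cF 'Failed to connect to "/var/run/socket_vmnet": Connection refused' "$1"
}

# Hypothetical usage against a captured log:
# count_vmnet_failures logs.txt
```

If the count is nonzero, the first thing to check is the socket_vmnet daemon on the Jenkins host, since it is launched independently of minikube and backs the `/var/run/socket_vmnet` socket that `socket_vmnet_client` failed to reach above.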

TestAddons/Setup (44.55s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-598000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:88: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-598000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 80 (44.546967042s)

-- stdout --
	* [addons-598000] minikube v1.31.1 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17011
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node addons-598000 in cluster addons-598000
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.27.4 on Docker 24.0.4 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying ingress addon...
	
	  - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.8
	  - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Verifying csi-hostpath-driver addon...
	* Verifying registry addon...
	* Verifying Kubernetes components...
	

-- /stdout --
** stderr ** 
	I0809 11:09:34.390559    1487 out.go:296] Setting OutFile to fd 1 ...
	I0809 11:09:34.390680    1487 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:09:34.390683    1487 out.go:309] Setting ErrFile to fd 2...
	I0809 11:09:34.390685    1487 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:09:34.390801    1487 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17011-995/.minikube/bin
	I0809 11:09:34.391833    1487 out.go:303] Setting JSON to false
	I0809 11:09:34.406925    1487 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":548,"bootTime":1691604026,"procs":394,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0809 11:09:34.406992    1487 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0809 11:09:34.411939    1487 out.go:177] * [addons-598000] minikube v1.31.1 on Darwin 13.5 (arm64)
	I0809 11:09:34.414970    1487 out.go:177]   - MINIKUBE_LOCATION=17011
	I0809 11:09:34.415024    1487 notify.go:220] Checking for updates...
	I0809 11:09:34.418918    1487 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	I0809 11:09:34.422932    1487 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0809 11:09:34.426887    1487 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0809 11:09:34.429949    1487 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	I0809 11:09:34.432892    1487 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0809 11:09:34.436082    1487 driver.go:373] Setting default libvirt URI to qemu:///system
	I0809 11:09:34.439920    1487 out.go:177] * Using the qemu2 driver based on user configuration
	I0809 11:09:34.446971    1487 start.go:298] selected driver: qemu2
	I0809 11:09:34.446978    1487 start.go:901] validating driver "qemu2" against <nil>
	I0809 11:09:34.446984    1487 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0809 11:09:34.448908    1487 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0809 11:09:34.451943    1487 out.go:177] * Automatically selected the socket_vmnet network
	I0809 11:09:34.454996    1487 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0809 11:09:34.455014    1487 cni.go:84] Creating CNI manager for ""
	I0809 11:09:34.455021    1487 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0809 11:09:34.455026    1487 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0809 11:09:34.455031    1487 start_flags.go:319] config:
	{Name:addons-598000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:addons-598000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0809 11:09:34.459222    1487 iso.go:125] acquiring lock: {Name:mkcf9da2bbc06f7ffafe691590a499fa3fc28d1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 11:09:34.465962    1487 out.go:177] * Starting control plane node addons-598000 in cluster addons-598000
	I0809 11:09:34.469885    1487 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0809 11:09:34.469905    1487 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4
	I0809 11:09:34.469917    1487 cache.go:57] Caching tarball of preloaded images
	I0809 11:09:34.469979    1487 preload.go:174] Found /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0809 11:09:34.469984    1487 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on docker
	I0809 11:09:34.470145    1487 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/addons-598000/config.json ...
	I0809 11:09:34.470156    1487 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/addons-598000/config.json: {Name:mk2bf9665549cf116849f1a9d8ba4685439b2daa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 11:09:34.470338    1487 start.go:365] acquiring machines lock for addons-598000: {Name:mk8e7245a2916cb0ab3d957fdcdeb632388214d9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0809 11:09:34.470400    1487 start.go:369] acquired machines lock for "addons-598000" in 56.417µs
	I0809 11:09:34.470409    1487 start.go:93] Provisioning new machine with config: &{Name:addons-598000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:addons-598000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0809 11:09:34.470439    1487 start.go:125] createHost starting for "" (driver="qemu2")
	I0809 11:09:34.473995    1487 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0809 11:09:34.811502    1487 start.go:159] libmachine.API.Create for "addons-598000" (driver="qemu2")
	I0809 11:09:34.811535    1487 client.go:168] LocalClient.Create starting
	I0809 11:09:34.811694    1487 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem
	I0809 11:09:34.881857    1487 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/17011-995/.minikube/certs/cert.pem
	I0809 11:09:35.058561    1487 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17011-995/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17011-995/.minikube/cache/iso/arm64/minikube-v1.31.0-1690838458-16971-arm64.iso...
	I0809 11:09:35.566841    1487 main.go:141] libmachine: Creating SSH key...
	I0809 11:09:35.868924    1487 main.go:141] libmachine: Creating Disk image...
	I0809 11:09:35.868935    1487 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0809 11:09:35.869191    1487 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17011-995/.minikube/machines/addons-598000/disk.qcow2.raw /Users/jenkins/minikube-integration/17011-995/.minikube/machines/addons-598000/disk.qcow2
	I0809 11:09:35.905033    1487 main.go:141] libmachine: STDOUT: 
	I0809 11:09:35.905061    1487 main.go:141] libmachine: STDERR: 
	I0809 11:09:35.905140    1487 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/addons-598000/disk.qcow2 +20000M
	I0809 11:09:35.912603    1487 main.go:141] libmachine: STDOUT: Image resized.
	
	I0809 11:09:35.912618    1487 main.go:141] libmachine: STDERR: 
	I0809 11:09:35.912639    1487 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17011-995/.minikube/machines/addons-598000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17011-995/.minikube/machines/addons-598000/disk.qcow2
	I0809 11:09:35.912648    1487 main.go:141] libmachine: Starting QEMU VM...
	I0809 11:09:35.912704    1487 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17011-995/.minikube/machines/addons-598000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/addons-598000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17011-995/.minikube/machines/addons-598000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:6b:2e:9d:a1:90 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/addons-598000/disk.qcow2
	I0809 11:09:35.977845    1487 main.go:141] libmachine: STDOUT: 
	I0809 11:09:35.977925    1487 main.go:141] libmachine: STDERR: 
	I0809 11:09:35.977933    1487 main.go:141] libmachine: Attempt 0
	I0809 11:09:35.977955    1487 main.go:141] libmachine: Searching for 7e:6b:2e:9d:a1:90 in /var/db/dhcpd_leases ...
	I0809 11:09:37.980067    1487 main.go:141] libmachine: Attempt 1
	I0809 11:09:37.980153    1487 main.go:141] libmachine: Searching for 7e:6b:2e:9d:a1:90 in /var/db/dhcpd_leases ...
	I0809 11:09:39.982294    1487 main.go:141] libmachine: Attempt 2
	I0809 11:09:39.982323    1487 main.go:141] libmachine: Searching for 7e:6b:2e:9d:a1:90 in /var/db/dhcpd_leases ...
	I0809 11:09:41.984364    1487 main.go:141] libmachine: Attempt 3
	I0809 11:09:41.984376    1487 main.go:141] libmachine: Searching for 7e:6b:2e:9d:a1:90 in /var/db/dhcpd_leases ...
	I0809 11:09:43.986376    1487 main.go:141] libmachine: Attempt 4
	I0809 11:09:43.986402    1487 main.go:141] libmachine: Searching for 7e:6b:2e:9d:a1:90 in /var/db/dhcpd_leases ...
	I0809 11:09:45.988500    1487 main.go:141] libmachine: Attempt 5
	I0809 11:09:45.988556    1487 main.go:141] libmachine: Searching for 7e:6b:2e:9d:a1:90 in /var/db/dhcpd_leases ...
	I0809 11:09:47.988687    1487 main.go:141] libmachine: Attempt 6
	I0809 11:09:47.988712    1487 main.go:141] libmachine: Searching for 7e:6b:2e:9d:a1:90 in /var/db/dhcpd_leases ...
	I0809 11:09:47.988856    1487 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0809 11:09:47.988891    1487 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:7e:6b:2e:9d:a1:90 ID:1,7e:6b:2e:9d:a1:90 Lease:0x64d527ea}
	I0809 11:09:47.988904    1487 main.go:141] libmachine: Found match: 7e:6b:2e:9d:a1:90
	I0809 11:09:47.988915    1487 main.go:141] libmachine: IP: 192.168.105.2
	I0809 11:09:47.988921    1487 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
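The polling loop above (Attempt 0 through 6, one try every two seconds) searches `/var/db/dhcpd_leases` for the VM's MAC address `7e:6b:2e:9d:a1:90` to learn the DHCP-assigned IP. A minimal Python sketch of that lookup, assuming a simplified lease-block format (the real macOS file uses `hw_address=1,<mac>` entries; the parsing here is illustrative, not minikube's implementation):

```python
import re

def find_ip_for_mac(leases_text, mac):
    """Scan dhcpd_leases-style text for the block containing `mac`
    and return its ip_address, or None when no lease matches."""
    for block in leases_text.split("}"):
        if mac in block:
            m = re.search(r"ip_address=(\S+)", block)
            if m:
                return m.group(1)
    return None

# Hypothetical lease entry mirroring the parsed fields in the log above.
sample = """{
    name=minikube
    ip_address=192.168.105.2
    hw_address=1,7e:6b:2e:9d:a1:90
    lease=0x64d527ea
}"""
print(find_ip_for_mac(sample, "7e:6b:2e:9d:a1:90"))  # → 192.168.105.2
```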
	I0809 11:09:50.009032    1487 machine.go:88] provisioning docker machine ...
	I0809 11:09:50.009091    1487 buildroot.go:166] provisioning hostname "addons-598000"
	I0809 11:09:50.010571    1487 main.go:141] libmachine: Using SSH client type: native
	I0809 11:09:50.011351    1487 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104aad590] 0x104aafff0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0809 11:09:50.011369    1487 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-598000 && echo "addons-598000" | sudo tee /etc/hostname
	I0809 11:09:50.086510    1487 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-598000
	
	I0809 11:09:50.086626    1487 main.go:141] libmachine: Using SSH client type: native
	I0809 11:09:50.087033    1487 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104aad590] 0x104aafff0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0809 11:09:50.087051    1487 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-598000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-598000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-598000' | sudo tee -a /etc/hosts; 
				fi
			fi
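The shell block above patches `/etc/hosts` idempotently: do nothing if the hostname is already mapped, rewrite an existing `127.0.1.1` line in place if there is one, and only otherwise append. The same logic as a hedged Python sketch (a simplification of the grep/sed commands shown; not minikube's code):

```python
def ensure_hosts_entry(hosts_text, hostname):
    """Idempotently map `hostname` to 127.0.1.1 in /etc/hosts text:
    no-op if present, rewrite an existing 127.0.1.1 line, else append."""
    lines = hosts_text.splitlines()
    if any(hostname in line.split()[1:] for line in lines if line.split()):
        return hosts_text  # already mapped, leave untouched
    for i, line in enumerate(lines):
        if line.startswith("127.0.1.1"):
            lines[i] = "127.0.1.1 " + hostname  # rewrite in place
            break
    else:
        lines.append("127.0.1.1 " + hostname)  # no 127.0.1.1 line yet
    return "\n".join(lines) + "\n"
```

Running it twice with the same hostname returns the text unchanged the second time, which is the property the guard clauses in the shell version exist to provide.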
	I0809 11:09:50.150430    1487 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0809 11:09:50.150444    1487 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17011-995/.minikube CaCertPath:/Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17011-995/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17011-995/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17011-995/.minikube}
	I0809 11:09:50.150454    1487 buildroot.go:174] setting up certificates
	I0809 11:09:50.150461    1487 provision.go:83] configureAuth start
	I0809 11:09:50.150466    1487 provision.go:138] copyHostCerts
	I0809 11:09:50.150592    1487 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17011-995/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17011-995/.minikube/key.pem (1679 bytes)
	I0809 11:09:50.150907    1487 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17011-995/.minikube/ca.pem (1082 bytes)
	I0809 11:09:50.151056    1487 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17011-995/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17011-995/.minikube/cert.pem (1123 bytes)
	I0809 11:09:50.151179    1487 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17011-995/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca-key.pem org=jenkins.addons-598000 san=[192.168.105.2 192.168.105.2 localhost 127.0.0.1 minikube addons-598000]
	I0809 11:09:50.201296    1487 provision.go:172] copyRemoteCerts
	I0809 11:09:50.201344    1487 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0809 11:09:50.201365    1487 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/addons-598000/id_rsa Username:docker}
	I0809 11:09:50.228473    1487 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17011-995/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0809 11:09:50.235631    1487 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17011-995/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0809 11:09:50.242649    1487 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0809 11:09:50.249319    1487 provision.go:86] duration metric: configureAuth took 98.856583ms
	I0809 11:09:50.249327    1487 buildroot.go:189] setting minikube options for container-runtime
	I0809 11:09:50.249437    1487 config.go:182] Loaded profile config "addons-598000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0809 11:09:50.249477    1487 main.go:141] libmachine: Using SSH client type: native
	I0809 11:09:50.249702    1487 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104aad590] 0x104aafff0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0809 11:09:50.249711    1487 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0809 11:09:50.302801    1487 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0809 11:09:50.302807    1487 buildroot.go:70] root file system type: tmpfs
	I0809 11:09:50.302870    1487 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0809 11:09:50.302913    1487 main.go:141] libmachine: Using SSH client type: native
	I0809 11:09:50.303157    1487 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104aad590] 0x104aafff0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0809 11:09:50.303192    1487 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0809 11:09:50.358348    1487 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0809 11:09:50.358396    1487 main.go:141] libmachine: Using SSH client type: native
	I0809 11:09:50.358625    1487 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104aad590] 0x104aafff0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0809 11:09:50.358634    1487 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0809 11:09:50.704451    1487 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
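The `diff -u ... || { mv ...; systemctl ...; }` one-liner above is an install-if-changed pattern: the staged `docker.service.new` only replaces the installed unit, and docker is only re-enabled and restarted, when the two files differ (here `diff` fails because no unit existed yet, so the new file is installed and the service enabled). A minimal local sketch of the same pattern in Python; the `on_change` hook stands in for the daemon-reload/restart step and is illustrative:

```python
import filecmp
import os
import shutil

def install_if_changed(staged_path, dest_path, on_change=lambda: None):
    """Move staged_path over dest_path only when contents differ;
    invoke on_change (e.g. a reload/restart hook) if a change landed."""
    if os.path.exists(dest_path) and filecmp.cmp(staged_path, dest_path, shallow=False):
        os.remove(staged_path)  # identical content: discard the staged copy
        return False
    shutil.move(staged_path, dest_path)
    on_change()
    return True
```

Skipping the restart when nothing changed is what makes repeated provisioning runs cheap and non-disruptive.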
	I0809 11:09:50.704464    1487 machine.go:91] provisioned docker machine in 695.43075ms
	I0809 11:09:50.704469    1487 client.go:171] LocalClient.Create took 15.893471417s
	I0809 11:09:50.704491    1487 start.go:167] duration metric: libmachine.API.Create for "addons-598000" took 15.8935395s
	I0809 11:09:50.704495    1487 start.go:300] post-start starting for "addons-598000" (driver="qemu2")
	I0809 11:09:50.704500    1487 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0809 11:09:50.704563    1487 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0809 11:09:50.704572    1487 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/addons-598000/id_rsa Username:docker}
	I0809 11:09:50.732924    1487 ssh_runner.go:195] Run: cat /etc/os-release
	I0809 11:09:50.734314    1487 info.go:137] Remote host: Buildroot 2021.02.12
	I0809 11:09:50.734320    1487 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17011-995/.minikube/addons for local assets ...
	I0809 11:09:50.734381    1487 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17011-995/.minikube/files for local assets ...
	I0809 11:09:50.734406    1487 start.go:303] post-start completed in 29.909583ms
	I0809 11:09:50.734748    1487 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/addons-598000/config.json ...
	I0809 11:09:50.734893    1487 start.go:128] duration metric: createHost completed in 16.265005s
	I0809 11:09:50.734929    1487 main.go:141] libmachine: Using SSH client type: native
	I0809 11:09:50.735142    1487 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104aad590] 0x104aafff0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0809 11:09:50.735146    1487 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0809 11:09:50.785986    1487 main.go:141] libmachine: SSH cmd err, output: <nil>: 1691604590.418146627
	
	I0809 11:09:50.785993    1487 fix.go:206] guest clock: 1691604590.418146627
	I0809 11:09:50.785997    1487 fix.go:219] Guest: 2023-08-09 11:09:50.418146627 -0700 PDT Remote: 2023-08-09 11:09:50.734897 -0700 PDT m=+16.363094751 (delta=-316.750373ms)
	I0809 11:09:50.786007    1487 fix.go:190] guest clock delta is within tolerance: -316.750373ms
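fix.go compares the guest's `date +%s.%N` output against the host wall clock and only resynchronizes when the drift exceeds a tolerance; the -316.75ms delta above passes. A sketch of that check — the 2-second tolerance is an assumption for illustration, not the value minikube uses:

```python
def clock_within_tolerance(guest_ts, host_ts, tolerance_s=2.0):
    """True when |guest - host| clock drift is within tolerance_s seconds."""
    return abs(guest_ts - host_ts) <= tolerance_s

# Timestamps from the log above: guest clock vs. host clock.
print(clock_within_tolerance(1691604590.418146627, 1691604590.734897))  # True
```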
	I0809 11:09:50.786010    1487 start.go:83] releasing machines lock for "addons-598000", held for 16.316160792s
	I0809 11:09:50.786311    1487 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0809 11:09:50.786311    1487 ssh_runner.go:195] Run: cat /version.json
	I0809 11:09:50.786386    1487 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/addons-598000/id_rsa Username:docker}
	I0809 11:09:50.786351    1487 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/addons-598000/id_rsa Username:docker}
	I0809 11:09:50.856586    1487 ssh_runner.go:195] Run: systemctl --version
	I0809 11:09:50.858721    1487 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0809 11:09:50.860364    1487 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0809 11:09:50.860398    1487 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0809 11:09:50.866526    1487 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0809 11:09:50.866534    1487 start.go:466] detecting cgroup driver to use...
	I0809 11:09:50.866637    1487 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0809 11:09:50.872234    1487 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0809 11:09:50.875612    1487 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0809 11:09:50.878657    1487 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0809 11:09:50.878681    1487 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0809 11:09:50.881462    1487 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0809 11:09:50.884790    1487 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0809 11:09:50.888150    1487 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0809 11:09:50.891402    1487 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0809 11:09:50.894257    1487 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0809 11:09:50.897293    1487 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0809 11:09:50.900374    1487 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0809 11:09:50.903100    1487 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0809 11:09:50.976106    1487 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0809 11:09:50.984418    1487 start.go:466] detecting cgroup driver to use...
	I0809 11:09:50.984487    1487 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0809 11:09:50.990927    1487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0809 11:09:50.997104    1487 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0809 11:09:51.003583    1487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0809 11:09:51.008227    1487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0809 11:09:51.012626    1487 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0809 11:09:51.032615    1487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0809 11:09:51.037240    1487 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0809 11:09:51.042686    1487 ssh_runner.go:195] Run: which cri-dockerd
	I0809 11:09:51.043878    1487 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0809 11:09:51.046609    1487 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0809 11:09:51.051510    1487 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0809 11:09:51.119297    1487 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0809 11:09:51.199832    1487 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0809 11:09:51.199845    1487 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0809 11:09:51.205304    1487 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0809 11:09:51.288387    1487 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0809 11:09:52.447200    1487 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.158837708s)
	I0809 11:09:52.447265    1487 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0809 11:09:52.523767    1487 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0809 11:09:52.596363    1487 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0809 11:09:52.685307    1487 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0809 11:09:52.768701    1487 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0809 11:09:52.776951    1487 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0809 11:09:52.863709    1487 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0809 11:09:52.886315    1487 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0809 11:09:52.886393    1487 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0809 11:09:52.889457    1487 start.go:534] Will wait 60s for crictl version
	I0809 11:09:52.889517    1487 ssh_runner.go:195] Run: which crictl
	I0809 11:09:52.891086    1487 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0809 11:09:52.905296    1487 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.4
	RuntimeApiVersion:  v1alpha2
	I0809 11:09:52.905368    1487 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0809 11:09:52.914939    1487 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0809 11:09:52.930592    1487 out.go:204] * Preparing Kubernetes v1.27.4 on Docker 24.0.4 ...
	I0809 11:09:52.930731    1487 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0809 11:09:52.932086    1487 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0809 11:09:52.935998    1487 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0809 11:09:52.936042    1487 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0809 11:09:52.941398    1487 docker.go:636] Got preloaded images: 
	I0809 11:09:52.941405    1487 docker.go:642] registry.k8s.io/kube-apiserver:v1.27.4 wasn't preloaded
	I0809 11:09:52.941444    1487 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0809 11:09:52.944784    1487 ssh_runner.go:195] Run: which lz4
	I0809 11:09:52.946304    1487 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0809 11:09:52.947554    1487 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0809 11:09:52.947568    1487 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (343658271 bytes)
	I0809 11:09:54.218657    1487 docker.go:600] Took 1.272448 seconds to copy over tarball
	I0809 11:09:54.218713    1487 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0809 11:09:55.258655    1487 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.039964709s)
	I0809 11:09:55.258667    1487 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0809 11:09:55.273578    1487 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0809 11:09:55.276980    1487 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0809 11:09:55.282080    1487 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0809 11:09:55.360262    1487 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0809 11:09:57.484279    1487 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.124075958s)
	I0809 11:09:57.484379    1487 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0809 11:09:57.490279    1487 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.4
	registry.k8s.io/kube-controller-manager:v1.27.4
	registry.k8s.io/kube-scheduler:v1.27.4
	registry.k8s.io/kube-proxy:v1.27.4
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
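docker.go decides whether the preload tarball worked by listing `docker images --format {{.Repository}}:{{.Tag}}` and checking for expected refs (earlier the empty list produced `kube-apiserver:v1.27.4 wasn't preloaded`; after extraction the list is complete, so image loading is skipped). A hedged sketch of that membership check — the required-image list here is illustrative, not minikube's manifest:

```python
def missing_images(listed, required):
    """Return the required image refs absent from `docker images` output."""
    have = set(listed.strip().splitlines())
    return [img for img in required if img not in have]

listed = """registry.k8s.io/kube-apiserver:v1.27.4
registry.k8s.io/pause:3.9"""
print(missing_images(listed, ["registry.k8s.io/kube-apiserver:v1.27.4",
                              "registry.k8s.io/etcd:3.5.7-0"]))
# → ['registry.k8s.io/etcd:3.5.7-0']
```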
	I0809 11:09:57.490288    1487 cache_images.go:84] Images are preloaded, skipping loading
	I0809 11:09:57.490358    1487 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0809 11:09:57.498200    1487 cni.go:84] Creating CNI manager for ""
	I0809 11:09:57.498210    1487 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0809 11:09:57.498239    1487 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0809 11:09:57.498249    1487 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.27.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-598000 NodeName:addons-598000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0809 11:09:57.498324    1487 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-598000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
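The generated kubeadm config above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---` markers. A quick sanity-check sketch that extracts each document's `kind` without a full YAML parser (a simplification; real validation is done by kubeadm itself when the config is applied):

```python
def doc_kinds(stream):
    """List the `kind:` of each document in a multi-doc YAML stream."""
    kinds = []
    for doc in stream.split("\n---\n"):
        for line in doc.splitlines():
            if line.startswith("kind:"):
                kinds.append(line.split(":", 1)[1].strip())
    return kinds

# Abbreviated version of the stream logged above.
stream = """apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration"""
print(doc_kinds(stream))  # → ['InitConfiguration', 'ClusterConfiguration', 'KubeletConfiguration']
```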
	I0809 11:09:57.498354    1487 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-598000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.4 ClusterName:addons-598000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0809 11:09:57.498414    1487 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.4
	I0809 11:09:57.501303    1487 binaries.go:44] Found k8s binaries, skipping transfer
	I0809 11:09:57.501329    1487 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0809 11:09:57.504590    1487 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0809 11:09:57.509739    1487 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0809 11:09:57.514566    1487 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0809 11:09:57.519639    1487 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0809 11:09:57.520951    1487 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0809 11:09:57.524869    1487 certs.go:56] Setting up /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/addons-598000 for IP: 192.168.105.2
	I0809 11:09:57.524880    1487 certs.go:190] acquiring lock for shared ca certs: {Name:mkc408918270161d0a558be6b69aedd9ebd20eac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 11:09:57.525038    1487 certs.go:204] generating minikubeCA CA: /Users/jenkins/minikube-integration/17011-995/.minikube/ca.key
	I0809 11:09:57.600455    1487 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17011-995/.minikube/ca.crt ...
	I0809 11:09:57.600460    1487 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17011-995/.minikube/ca.crt: {Name:mk2b760f2e573e689e12d22d8d5bdefa1623c3f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 11:09:57.600669    1487 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17011-995/.minikube/ca.key ...
	I0809 11:09:57.600673    1487 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17011-995/.minikube/ca.key: {Name:mk569314718f392230868144e272d239727be7d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 11:09:57.600792    1487 certs.go:204] generating proxyClientCA CA: /Users/jenkins/minikube-integration/17011-995/.minikube/proxy-client-ca.key
	I0809 11:09:57.660601    1487 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17011-995/.minikube/proxy-client-ca.crt ...
	I0809 11:09:57.660604    1487 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17011-995/.minikube/proxy-client-ca.crt: {Name:mk3d774cd79d2324066c71fc61b2a1b781339282 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 11:09:57.660730    1487 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17011-995/.minikube/proxy-client-ca.key ...
	I0809 11:09:57.660733    1487 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17011-995/.minikube/proxy-client-ca.key: {Name:mk80d837779bf35abd49039365cf30d87d4cbfab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 11:09:57.660863    1487 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/addons-598000/client.key
	I0809 11:09:57.660881    1487 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/addons-598000/client.crt with IP's: []
	I0809 11:09:57.891297    1487 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/addons-598000/client.crt ...
	I0809 11:09:57.891306    1487 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/addons-598000/client.crt: {Name:mk9918c48c1e49bffe9faf0885de8ffec1d2b2ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 11:09:57.891554    1487 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/addons-598000/client.key ...
	I0809 11:09:57.891560    1487 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/addons-598000/client.key: {Name:mkafde5fb91e4b2ffcd03d7ece7983fcc3e341df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 11:09:57.891658    1487 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/addons-598000/apiserver.key.96055969
	I0809 11:09:57.891671    1487 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/addons-598000/apiserver.crt.96055969 with IP's: [192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0809 11:09:58.000046    1487 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/addons-598000/apiserver.crt.96055969 ...
	I0809 11:09:58.000050    1487 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/addons-598000/apiserver.crt.96055969: {Name:mkaa55e6085b8cafafdede49f3380b51a5cc2f80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 11:09:58.000205    1487 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/addons-598000/apiserver.key.96055969 ...
	I0809 11:09:58.000209    1487 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/addons-598000/apiserver.key.96055969: {Name:mk80d503096f1482a47c96bb08295b11f136a498 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 11:09:58.000389    1487 certs.go:337] copying /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/addons-598000/apiserver.crt.96055969 -> /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/addons-598000/apiserver.crt
	I0809 11:09:58.000497    1487 certs.go:341] copying /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/addons-598000/apiserver.key.96055969 -> /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/addons-598000/apiserver.key
	I0809 11:09:58.000579    1487 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/addons-598000/proxy-client.key
	I0809 11:09:58.000588    1487 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/addons-598000/proxy-client.crt with IP's: []
	I0809 11:09:58.249336    1487 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/addons-598000/proxy-client.crt ...
	I0809 11:09:58.249345    1487 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/addons-598000/proxy-client.crt: {Name:mk738a2c7e4eee32ffa04e0915186a96b53d1385 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 11:09:58.249580    1487 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/addons-598000/proxy-client.key ...
	I0809 11:09:58.249583    1487 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/addons-598000/proxy-client.key: {Name:mked4a7cb1e737882c052dc64642f75f96de3f9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 11:09:58.249830    1487 certs.go:437] found cert: /Users/jenkins/minikube-integration/17011-995/.minikube/certs/Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca-key.pem (1679 bytes)
	I0809 11:09:58.249855    1487 certs.go:437] found cert: /Users/jenkins/minikube-integration/17011-995/.minikube/certs/Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem (1082 bytes)
	I0809 11:09:58.249873    1487 certs.go:437] found cert: /Users/jenkins/minikube-integration/17011-995/.minikube/certs/Users/jenkins/minikube-integration/17011-995/.minikube/certs/cert.pem (1123 bytes)
	I0809 11:09:58.249891    1487 certs.go:437] found cert: /Users/jenkins/minikube-integration/17011-995/.minikube/certs/Users/jenkins/minikube-integration/17011-995/.minikube/certs/key.pem (1679 bytes)
	I0809 11:09:58.250204    1487 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/addons-598000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0809 11:09:58.257661    1487 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/addons-598000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0809 11:09:58.264728    1487 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/addons-598000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0809 11:09:58.272179    1487 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/addons-598000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0809 11:09:58.279233    1487 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17011-995/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0809 11:09:58.285884    1487 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17011-995/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0809 11:09:58.292640    1487 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17011-995/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0809 11:09:58.300033    1487 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17011-995/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0809 11:09:58.307744    1487 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17011-995/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0809 11:09:58.315055    1487 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0809 11:09:58.321117    1487 ssh_runner.go:195] Run: openssl version
	I0809 11:09:58.323184    1487 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0809 11:09:58.326220    1487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0809 11:09:58.327802    1487 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug  9 18:09 /usr/share/ca-certificates/minikubeCA.pem
	I0809 11:09:58.327823    1487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0809 11:09:58.329815    1487 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
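The two steps above are how OpenSSL-style trust directories work: `openssl x509 -hash` prints the subject-name hash (b5213941 for minikubeCA.pem here), and the CA is made discoverable by symlinking `<hash>.0` to the PEM. A sketch using a throwaway self-signed CA in a temp dir, rather than the real /etc/ssl/certs:

```shell
# Sketch of the subject-hash symlink step: OpenSSL looks up CAs in a certs
# directory by subject-name hash, so each PEM needs a <hash>.0 link.
CERT_DIR=$(mktemp -d)
# Generate a throwaway self-signed CA to stand in for minikubeCA.pem.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demoCA" \
  -keyout "$CERT_DIR/ca.key" -out "$CERT_DIR/demoCA.pem" -days 1 2>/dev/null
# Same invocation as the log; prints an 8-hex-digit subject hash.
HASH=$(openssl x509 -hash -noout -in "$CERT_DIR/demoCA.pem")
ln -fs "$CERT_DIR/demoCA.pem" "$CERT_DIR/$HASH.0"
ls -l "$CERT_DIR/$HASH.0"
```

The `test -L || ln -fs` guard in the log just makes re-runs cheap when the link already exists.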
	I0809 11:09:58.333074    1487 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0809 11:09:58.334787    1487 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0809 11:09:58.334823    1487 kubeadm.go:404] StartCluster: {Name:addons-598000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.27.4 ClusterName:addons-598000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0809 11:09:58.334896    1487 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0809 11:09:58.340592    1487 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0809 11:09:58.343953    1487 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0809 11:09:58.346768    1487 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0809 11:09:58.349435    1487 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0809 11:09:58.349447    1487 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0809 11:09:58.371758    1487 kubeadm.go:322] [init] Using Kubernetes version: v1.27.4
	I0809 11:09:58.371785    1487 kubeadm.go:322] [preflight] Running pre-flight checks
	I0809 11:09:58.424209    1487 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0809 11:09:58.424262    1487 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0809 11:09:58.424313    1487 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0809 11:09:58.482496    1487 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0809 11:09:58.489668    1487 out.go:204]   - Generating certificates and keys ...
	I0809 11:09:58.489705    1487 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0809 11:09:58.489734    1487 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0809 11:09:58.682241    1487 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0809 11:09:58.775670    1487 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0809 11:09:58.871031    1487 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0809 11:09:58.960454    1487 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0809 11:09:59.040842    1487 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0809 11:09:59.040898    1487 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-598000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0809 11:09:59.270962    1487 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0809 11:09:59.271027    1487 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-598000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0809 11:09:59.590691    1487 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0809 11:09:59.665330    1487 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0809 11:09:59.725988    1487 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0809 11:09:59.726018    1487 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0809 11:09:59.806014    1487 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0809 11:09:59.910591    1487 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0809 11:09:59.990285    1487 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0809 11:10:00.031843    1487 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0809 11:10:00.038422    1487 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0809 11:10:00.038470    1487 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0809 11:10:00.038493    1487 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0809 11:10:00.129868    1487 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0809 11:10:00.136011    1487 out.go:204]   - Booting up control plane ...
	I0809 11:10:00.136083    1487 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0809 11:10:00.136118    1487 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0809 11:10:00.136146    1487 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0809 11:10:00.136199    1487 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0809 11:10:00.136300    1487 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0809 11:10:04.646093    1487 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.510146 seconds
	I0809 11:10:04.646261    1487 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0809 11:10:04.664277    1487 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0809 11:10:05.182174    1487 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0809 11:10:05.182282    1487 kubeadm.go:322] [mark-control-plane] Marking the node addons-598000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0809 11:10:05.704711    1487 kubeadm.go:322] [bootstrap-token] Using token: g9tfu1.f9woh0kqovavo3vg
	I0809 11:10:05.715789    1487 out.go:204]   - Configuring RBAC rules ...
	I0809 11:10:05.715877    1487 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0809 11:10:05.717989    1487 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0809 11:10:05.724030    1487 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0809 11:10:05.727370    1487 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0809 11:10:05.729665    1487 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0809 11:10:05.730597    1487 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0809 11:10:05.737730    1487 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0809 11:10:05.911436    1487 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0809 11:10:06.119648    1487 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0809 11:10:06.120255    1487 kubeadm.go:322] 
	I0809 11:10:06.120291    1487 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0809 11:10:06.120297    1487 kubeadm.go:322] 
	I0809 11:10:06.120365    1487 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0809 11:10:06.120375    1487 kubeadm.go:322] 
	I0809 11:10:06.120391    1487 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0809 11:10:06.120445    1487 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0809 11:10:06.120471    1487 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0809 11:10:06.120474    1487 kubeadm.go:322] 
	I0809 11:10:06.120515    1487 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0809 11:10:06.120519    1487 kubeadm.go:322] 
	I0809 11:10:06.120544    1487 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0809 11:10:06.120546    1487 kubeadm.go:322] 
	I0809 11:10:06.120570    1487 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0809 11:10:06.120618    1487 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0809 11:10:06.120714    1487 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0809 11:10:06.120721    1487 kubeadm.go:322] 
	I0809 11:10:06.120756    1487 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0809 11:10:06.120789    1487 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0809 11:10:06.120792    1487 kubeadm.go:322] 
	I0809 11:10:06.120890    1487 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token g9tfu1.f9woh0kqovavo3vg \
	I0809 11:10:06.120949    1487 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c906fcf1732ee135ed5d8c53a2456ece48422acee8957afd996ec13f4bd01100 \
	I0809 11:10:06.120961    1487 kubeadm.go:322] 	--control-plane 
	I0809 11:10:06.120966    1487 kubeadm.go:322] 
	I0809 11:10:06.121004    1487 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0809 11:10:06.121006    1487 kubeadm.go:322] 
	I0809 11:10:06.121054    1487 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token g9tfu1.f9woh0kqovavo3vg \
	I0809 11:10:06.121120    1487 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c906fcf1732ee135ed5d8c53a2456ece48422acee8957afd996ec13f4bd01100 
	I0809 11:10:06.121180    1487 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0809 11:10:06.121187    1487 cni.go:84] Creating CNI manager for ""
	I0809 11:10:06.121195    1487 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0809 11:10:06.127009    1487 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0809 11:10:06.131103    1487 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0809 11:10:06.134165    1487 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
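The 457-byte payload scp'd into /etc/cni/net.d/1-k8s.conflist is not shown in the log. For orientation, a hypothetical bridge conflist of the general shape kubelet expects; plugin names, subnet, and options here are illustrative only, not minikube's actual file:

```shell
# Hypothetical minimal bridge CNI conflist; the real 1-k8s.conflist
# contents are not in the log, so every value below is an assumption.
CNI_DIR=$(mktemp -d)
cat > "$CNI_DIR/1-k8s.conflist" <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF
# Sanity-check that the file is well-formed JSON before "installing" it.
python3 -m json.tool "$CNI_DIR/1-k8s.conflist" > /dev/null && echo "valid JSON"
```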
	I0809 11:10:06.138838    1487 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0809 11:10:06.138882    1487 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 11:10:06.138893    1487 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl label nodes minikube.k8s.io/version=v1.31.1 minikube.k8s.io/commit=e286a113bb5db20a65222adef757d15268cdbb1a minikube.k8s.io/name=addons-598000 minikube.k8s.io/updated_at=2023_08_09T11_10_06_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 11:10:06.206056    1487 ops.go:34] apiserver oom_adj: -16
	I0809 11:10:06.206104    1487 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 11:10:06.237279    1487 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 11:10:06.770704    1487 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 11:10:07.270746    1487 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 11:10:07.770739    1487 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 11:10:08.270631    1487 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 11:10:08.770832    1487 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 11:10:09.270780    1487 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 11:10:09.770871    1487 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 11:10:10.270850    1487 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 11:10:10.770745    1487 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 11:10:11.270752    1487 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 11:10:11.770675    1487 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 11:10:12.270804    1487 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 11:10:12.770686    1487 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 11:10:13.270436    1487 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 11:10:13.770719    1487 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 11:10:14.269828    1487 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 11:10:14.770656    1487 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 11:10:15.270378    1487 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 11:10:15.770342    1487 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 11:10:16.270356    1487 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 11:10:16.770382    1487 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 11:10:17.270364    1487 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 11:10:17.770250    1487 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 11:10:18.270296    1487 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 11:10:18.770291    1487 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 11:10:18.803834    1487 kubeadm.go:1081] duration metric: took 12.665416417s to wait for elevateKubeSystemPrivileges.
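The run of repeated `kubectl get sa default` lines above is a poll-until-ready loop: minikube retries roughly every 500ms until the default ServiceAccount exists, then reports the total wait (12.67s here). The shape of that loop, with the kubectl call replaced by a hypothetical `check_ready` stub so the sketch runs without a cluster:

```shell
# Sketch of the poll-until-ready loop; check_ready is a stand-in for
# `kubectl get sa default --kubeconfig=...`, which succeeds only once
# the default ServiceAccount has been created.
ATTEMPTS=0
check_ready() {
  [ "$ATTEMPTS" -ge 5 ]   # pretend readiness arrives on the 5th try
}
until check_ready; do
  ATTEMPTS=$((ATTEMPTS + 1))
  sleep 0.1               # the log shows ~500ms between real retries
done
echo "ready after $ATTEMPTS attempts"
```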
	I0809 11:10:18.803851    1487 kubeadm.go:406] StartCluster complete in 20.469723333s
	I0809 11:10:18.803861    1487 settings.go:142] acquiring lock: {Name:mkccab662ae5271e860bc4bdf3048d54a609848d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 11:10:18.804010    1487 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17011-995/kubeconfig
	I0809 11:10:18.804269    1487 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17011-995/kubeconfig: {Name:mk08b0de0097dc34716acdd012f0f4571979d434 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 11:10:18.804487    1487 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0809 11:10:18.804593    1487 config.go:182] Loaded profile config "addons-598000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0809 11:10:18.804577    1487 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0809 11:10:18.804656    1487 addons.go:69] Setting ingress=true in profile "addons-598000"
	I0809 11:10:18.804664    1487 addons.go:231] Setting addon ingress=true in "addons-598000"
	I0809 11:10:18.804677    1487 addons.go:69] Setting volumesnapshots=true in profile "addons-598000"
	I0809 11:10:18.804680    1487 addons.go:69] Setting metrics-server=true in profile "addons-598000"
	I0809 11:10:18.804684    1487 addons.go:231] Setting addon volumesnapshots=true in "addons-598000"
	I0809 11:10:18.804686    1487 addons.go:231] Setting addon metrics-server=true in "addons-598000"
	I0809 11:10:18.804707    1487 host.go:66] Checking if "addons-598000" exists ...
	I0809 11:10:18.804709    1487 addons.go:69] Setting storage-provisioner=true in profile "addons-598000"
	I0809 11:10:18.804711    1487 addons.go:69] Setting inspektor-gadget=true in profile "addons-598000"
	I0809 11:10:18.804714    1487 addons.go:231] Setting addon storage-provisioner=true in "addons-598000"
	I0809 11:10:18.804716    1487 addons.go:231] Setting addon inspektor-gadget=true in "addons-598000"
	I0809 11:10:18.804727    1487 host.go:66] Checking if "addons-598000" exists ...
	I0809 11:10:18.804732    1487 addons.go:69] Setting default-storageclass=true in profile "addons-598000"
	I0809 11:10:18.804737    1487 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-598000"
	I0809 11:10:18.804732    1487 addons.go:69] Setting registry=true in profile "addons-598000"
	I0809 11:10:18.804766    1487 addons.go:231] Setting addon registry=true in "addons-598000"
	I0809 11:10:18.804731    1487 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-598000"
	I0809 11:10:18.804767    1487 addons.go:69] Setting cloud-spanner=true in profile "addons-598000"
	I0809 11:10:18.804806    1487 addons.go:231] Setting addon cloud-spanner=true in "addons-598000"
	I0809 11:10:18.804818    1487 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-598000"
	I0809 11:10:18.804856    1487 host.go:66] Checking if "addons-598000" exists ...
	I0809 11:10:18.804859    1487 host.go:66] Checking if "addons-598000" exists ...
	I0809 11:10:18.804933    1487 host.go:66] Checking if "addons-598000" exists ...
	I0809 11:10:18.804710    1487 addons.go:69] Setting ingress-dns=true in profile "addons-598000"
	W0809 11:10:18.804966    1487 host.go:54] host status for "addons-598000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17011-995/.minikube/machines/addons-598000/monitor: connect: connection refused
	W0809 11:10:18.804974    1487 addons.go:277] "addons-598000" is not running, setting inspektor-gadget=true and skipping enablement (err=<nil>)
	I0809 11:10:18.804974    1487 addons.go:231] Setting addon ingress-dns=true in "addons-598000"
	I0809 11:10:18.805006    1487 host.go:66] Checking if "addons-598000" exists ...
	W0809 11:10:18.804966    1487 host.go:54] host status for "addons-598000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17011-995/.minikube/machines/addons-598000/monitor: connect: connection refused
	W0809 11:10:18.805030    1487 addons.go:277] "addons-598000" is not running, setting ingress=true and skipping enablement (err=<nil>)
	I0809 11:10:18.805037    1487 addons.go:467] Verifying addon ingress=true in "addons-598000"
	I0809 11:10:18.804935    1487 addons.go:69] Setting gcp-auth=true in profile "addons-598000"
	I0809 11:10:18.805053    1487 mustload.go:65] Loading cluster: addons-598000
	I0809 11:10:18.805125    1487 config.go:182] Loaded profile config "addons-598000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0809 11:10:18.809060    1487 out.go:177] * Verifying ingress addon...
	I0809 11:10:18.804729    1487 host.go:66] Checking if "addons-598000" exists ...
	I0809 11:10:18.804707    1487 host.go:66] Checking if "addons-598000" exists ...
	I0809 11:10:18.804707    1487 host.go:66] Checking if "addons-598000" exists ...
	W0809 11:10:18.805231    1487 host.go:54] host status for "addons-598000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17011-995/.minikube/machines/addons-598000/monitor: connect: connection refused
	W0809 11:10:18.805346    1487 host.go:54] host status for "addons-598000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17011-995/.minikube/machines/addons-598000/monitor: connect: connection refused
	W0809 11:10:18.805417    1487 host.go:54] host status for "addons-598000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17011-995/.minikube/machines/addons-598000/monitor: connect: connection refused
	I0809 11:10:18.812337    1487 addons.go:231] Setting addon default-storageclass=true in "addons-598000"
	W0809 11:10:18.818075    1487 addons.go:277] "addons-598000" is not running, setting ingress-dns=true and skipping enablement (err=<nil>)
	W0809 11:10:18.818107    1487 addons.go:277] "addons-598000" is not running, setting registry=true and skipping enablement (err=<nil>)
	W0809 11:10:18.818310    1487 host.go:54] host status for "addons-598000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17011-995/.minikube/machines/addons-598000/monitor: connect: connection refused
	I0809 11:10:18.818578    1487 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0809 11:10:18.821234    1487 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-598000" context rescaled to 1 replicas
	I0809 11:10:18.821991    1487 out.go:177] 
	W0809 11:10:18.821998    1487 addons.go:277] "addons-598000" is not running, setting csi-hostpath-driver=true and skipping enablement (err=<nil>)
	I0809 11:10:18.826023    1487 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.8
	W0809 11:10:18.826058    1487 addons.go:277] "addons-598000" is not running, setting volumesnapshots=true and skipping enablement (err=<nil>)
	I0809 11:10:18.826067    1487 addons.go:467] Verifying addon registry=true in "addons-598000"
	I0809 11:10:18.826070    1487 host.go:66] Checking if "addons-598000" exists ...
	I0809 11:10:18.829818    1487 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0809 11:10:18.836022    1487 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-598000"
	W0809 11:10:18.839984    1487 out.go:239] X Exiting due to GUEST_STATUS: Unable to get machine status: state: connect: dial unix /Users/jenkins/minikube-integration/17011-995/.minikube/machines/addons-598000/monitor: connect: connection refused
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: connect: dial unix /Users/jenkins/minikube-integration/17011-995/.minikube/machines/addons-598000/monitor: connect: connection refused
	W0809 11:10:18.839989    1487 out.go:239] * 
	* 
	I0809 11:10:18.836027    1487 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0809 11:10:18.836062    1487 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0809 11:10:18.836892    1487 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	W0809 11:10:18.840522    1487 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0809 11:10:18.849059    1487 out.go:177] * Verifying csi-hostpath-driver addon...
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0809 11:10:18.859057    1487 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I0809 11:10:18.861235    1487 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0809 11:10:18.864926    1487 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0809 11:10:18.867062    1487 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0809 11:10:18.871015    1487 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0809 11:10:18.871022    1487 out.go:177] * Verifying registry addon...
	I0809 11:10:18.881039    1487 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1003 bytes)
	I0809 11:10:18.881053    1487 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0809 11:10:18.884061    1487 out.go:177] * Verifying Kubernetes components...
	I0809 11:10:18.884100    1487 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/addons-598000/id_rsa Username:docker}
	I0809 11:10:18.884107    1487 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0809 11:10:18.889976    1487 out.go:177] 
	I0809 11:10:18.893117    1487 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0809 11:10:18.893182    1487 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/addons-598000/id_rsa Username:docker}
	I0809 11:10:18.893189    1487 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/addons-598000/id_rsa Username:docker}
	I0809 11:10:18.893555    1487 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0809 11:10:18.899097    1487 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/addons-598000/id_rsa Username:docker}
	I0809 11:10:18.899586    1487 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0809 11:10:18.903024    1487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet

** /stderr **
addons_test.go:90: out/minikube-darwin-arm64 start -p addons-598000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 80
--- FAIL: TestAddons/Setup (44.55s)

TestCertOptions (10.08s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-294000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-294000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.806336542s)

-- stdout --
	* [cert-options-294000] minikube v1.31.1 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17011
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node cert-options-294000 in cluster cert-options-294000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-294000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-294000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-294000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-294000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-294000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 89 (77.182625ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-294000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-294000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 89
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-294000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-294000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-294000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 89 (40.535708ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-294000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-294000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 89
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-294000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2023-08-09 11:23:27.46899 -0700 PDT m=+876.609681460
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-294000 -n cert-options-294000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-294000 -n cert-options-294000: exit status 7 (28.416125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-294000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-294000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-294000
--- FAIL: TestCertOptions (10.08s)
E0809 11:23:42.237739    1410 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/functional-901000/client.crt: no such file or directory
E0809 11:24:09.944140    1410 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/functional-901000/client.crt: no such file or directory
E0809 11:24:23.751549    1410 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/ingress-addon-legacy-050000/client.crt: no such file or directory

TestCertExpiration (195.26s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-979000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-979000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.825823334s)

-- stdout --
	* [cert-expiration-979000] minikube v1.31.1 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17011
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node cert-expiration-979000 in cluster cert-expiration-979000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-979000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-979000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-979000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-979000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-979000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.229778s)

-- stdout --
	* [cert-expiration-979000] minikube v1.31.1 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17011
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node cert-expiration-979000 in cluster cert-expiration-979000
	* Restarting existing qemu2 VM for "cert-expiration-979000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-979000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-979000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-979000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-979000] minikube v1.31.1 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17011
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node cert-expiration-979000 in cluster cert-expiration-979000
	* Restarting existing qemu2 VM for "cert-expiration-979000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-979000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-979000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
cert_options_test.go:138: *** TestCertExpiration FAILED at 2023-08-09 11:26:27.46464 -0700 PDT m=+1056.608531043
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-979000 -n cert-expiration-979000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-979000 -n cert-expiration-979000: exit status 7 (67.159291ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-979000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-979000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-979000
--- FAIL: TestCertExpiration (195.26s)

TestDockerFlags (10.08s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-050000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-050000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.823956792s)

-- stdout --
	* [docker-flags-050000] minikube v1.31.1 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17011
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node docker-flags-050000 in cluster docker-flags-050000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-050000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0809 11:23:07.452309    3198 out.go:296] Setting OutFile to fd 1 ...
	I0809 11:23:07.452415    3198 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:23:07.452417    3198 out.go:309] Setting ErrFile to fd 2...
	I0809 11:23:07.452420    3198 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:23:07.452527    3198 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17011-995/.minikube/bin
	I0809 11:23:07.453508    3198 out.go:303] Setting JSON to false
	I0809 11:23:07.468722    3198 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1361,"bootTime":1691604026,"procs":403,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0809 11:23:07.468785    3198 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0809 11:23:07.473408    3198 out.go:177] * [docker-flags-050000] minikube v1.31.1 on Darwin 13.5 (arm64)
	I0809 11:23:07.481274    3198 out.go:177]   - MINIKUBE_LOCATION=17011
	I0809 11:23:07.481328    3198 notify.go:220] Checking for updates...
	I0809 11:23:07.485136    3198 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	I0809 11:23:07.488300    3198 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0809 11:23:07.491261    3198 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0809 11:23:07.494303    3198 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	I0809 11:23:07.497305    3198 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0809 11:23:07.500997    3198 config.go:182] Loaded profile config "force-systemd-flag-037000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0809 11:23:07.501093    3198 config.go:182] Loaded profile config "multinode-305000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0809 11:23:07.501146    3198 driver.go:373] Setting default libvirt URI to qemu:///system
	I0809 11:23:07.509186    3198 out.go:177] * Using the qemu2 driver based on user configuration
	I0809 11:23:07.512243    3198 start.go:298] selected driver: qemu2
	I0809 11:23:07.512248    3198 start.go:901] validating driver "qemu2" against <nil>
	I0809 11:23:07.512253    3198 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0809 11:23:07.514178    3198 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0809 11:23:07.517305    3198 out.go:177] * Automatically selected the socket_vmnet network
	I0809 11:23:07.520336    3198 start_flags.go:914] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0809 11:23:07.520361    3198 cni.go:84] Creating CNI manager for ""
	I0809 11:23:07.520368    3198 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0809 11:23:07.520372    3198 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0809 11:23:07.520384    3198 start_flags.go:319] config:
	{Name:docker-flags-050000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:docker-flags-050000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain
:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/ru
n/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0809 11:23:07.524760    3198 iso.go:125] acquiring lock: {Name:mkcf9da2bbc06f7ffafe691590a499fa3fc28d1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 11:23:07.532121    3198 out.go:177] * Starting control plane node docker-flags-050000 in cluster docker-flags-050000
	I0809 11:23:07.536246    3198 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0809 11:23:07.536267    3198 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4
	I0809 11:23:07.536277    3198 cache.go:57] Caching tarball of preloaded images
	I0809 11:23:07.536348    3198 preload.go:174] Found /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0809 11:23:07.536354    3198 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on docker
	I0809 11:23:07.536432    3198 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/docker-flags-050000/config.json ...
	I0809 11:23:07.536445    3198 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/docker-flags-050000/config.json: {Name:mke790379fa31c0ffb8a0ab311878c116f74747b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 11:23:07.536651    3198 start.go:365] acquiring machines lock for docker-flags-050000: {Name:mk8e7245a2916cb0ab3d957fdcdeb632388214d9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0809 11:23:07.536680    3198 start.go:369] acquired machines lock for "docker-flags-050000" in 23.958µs
	I0809 11:23:07.536690    3198 start.go:93] Provisioning new machine with config: &{Name:docker-flags-050000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root
SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:docker-flags-050000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion
:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0809 11:23:07.536720    3198 start.go:125] createHost starting for "" (driver="qemu2")
	I0809 11:23:07.544295    3198 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0809 11:23:07.559947    3198 start.go:159] libmachine.API.Create for "docker-flags-050000" (driver="qemu2")
	I0809 11:23:07.559973    3198 client.go:168] LocalClient.Create starting
	I0809 11:23:07.560019    3198 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem
	I0809 11:23:07.560043    3198 main.go:141] libmachine: Decoding PEM data...
	I0809 11:23:07.560054    3198 main.go:141] libmachine: Parsing certificate...
	I0809 11:23:07.560093    3198 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/cert.pem
	I0809 11:23:07.560117    3198 main.go:141] libmachine: Decoding PEM data...
	I0809 11:23:07.560124    3198 main.go:141] libmachine: Parsing certificate...
	I0809 11:23:07.560433    3198 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17011-995/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17011-995/.minikube/cache/iso/arm64/minikube-v1.31.0-1690838458-16971-arm64.iso...
	I0809 11:23:07.675119    3198 main.go:141] libmachine: Creating SSH key...
	I0809 11:23:07.754079    3198 main.go:141] libmachine: Creating Disk image...
	I0809 11:23:07.754086    3198 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0809 11:23:07.754224    3198 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17011-995/.minikube/machines/docker-flags-050000/disk.qcow2.raw /Users/jenkins/minikube-integration/17011-995/.minikube/machines/docker-flags-050000/disk.qcow2
	I0809 11:23:07.762742    3198 main.go:141] libmachine: STDOUT: 
	I0809 11:23:07.762758    3198 main.go:141] libmachine: STDERR: 
	I0809 11:23:07.762820    3198 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/docker-flags-050000/disk.qcow2 +20000M
	I0809 11:23:07.769966    3198 main.go:141] libmachine: STDOUT: Image resized.
	
	I0809 11:23:07.769993    3198 main.go:141] libmachine: STDERR: 
	I0809 11:23:07.770003    3198 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17011-995/.minikube/machines/docker-flags-050000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17011-995/.minikube/machines/docker-flags-050000/disk.qcow2
	I0809 11:23:07.770010    3198 main.go:141] libmachine: Starting QEMU VM...
	I0809 11:23:07.770049    3198 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17011-995/.minikube/machines/docker-flags-050000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/docker-flags-050000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17011-995/.minikube/machines/docker-flags-050000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:76:ee:5b:bb:d6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/docker-flags-050000/disk.qcow2
	I0809 11:23:07.771539    3198 main.go:141] libmachine: STDOUT: 
	I0809 11:23:07.771551    3198 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0809 11:23:07.771569    3198 client.go:171] LocalClient.Create took 211.533792ms
	I0809 11:23:09.774280    3198 start.go:128] duration metric: createHost completed in 2.236971333s
	I0809 11:23:09.774336    3198 start.go:83] releasing machines lock for "docker-flags-050000", held for 2.237074s
	W0809 11:23:09.774396    3198 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:23:09.785441    3198 out.go:177] * Deleting "docker-flags-050000" in qemu2 ...
	W0809 11:23:09.806828    3198 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:23:09.806871    3198 start.go:687] Will try again in 5 seconds ...
	I0809 11:23:14.810066    3198 start.go:365] acquiring machines lock for docker-flags-050000: {Name:mk8e7245a2916cb0ab3d957fdcdeb632388214d9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0809 11:23:14.810392    3198 start.go:369] acquired machines lock for "docker-flags-050000" in 251.125µs
	I0809 11:23:14.810499    3198 start.go:93] Provisioning new machine with config: &{Name:docker-flags-050000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root
SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:docker-flags-050000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion
:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0809 11:23:14.810736    3198 start.go:125] createHost starting for "" (driver="qemu2")
	I0809 11:23:14.818016    3198 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0809 11:23:14.860019    3198 start.go:159] libmachine.API.Create for "docker-flags-050000" (driver="qemu2")
	I0809 11:23:14.860053    3198 client.go:168] LocalClient.Create starting
	I0809 11:23:14.860165    3198 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem
	I0809 11:23:14.860216    3198 main.go:141] libmachine: Decoding PEM data...
	I0809 11:23:14.860237    3198 main.go:141] libmachine: Parsing certificate...
	I0809 11:23:14.860316    3198 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/cert.pem
	I0809 11:23:14.860351    3198 main.go:141] libmachine: Decoding PEM data...
	I0809 11:23:14.860364    3198 main.go:141] libmachine: Parsing certificate...
	I0809 11:23:14.860946    3198 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17011-995/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17011-995/.minikube/cache/iso/arm64/minikube-v1.31.0-1690838458-16971-arm64.iso...
	I0809 11:23:14.989631    3198 main.go:141] libmachine: Creating SSH key...
	I0809 11:23:15.191005    3198 main.go:141] libmachine: Creating Disk image...
	I0809 11:23:15.191017    3198 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0809 11:23:15.191169    3198 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17011-995/.minikube/machines/docker-flags-050000/disk.qcow2.raw /Users/jenkins/minikube-integration/17011-995/.minikube/machines/docker-flags-050000/disk.qcow2
	I0809 11:23:15.199782    3198 main.go:141] libmachine: STDOUT: 
	I0809 11:23:15.199796    3198 main.go:141] libmachine: STDERR: 
	I0809 11:23:15.199848    3198 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/docker-flags-050000/disk.qcow2 +20000M
	I0809 11:23:15.206993    3198 main.go:141] libmachine: STDOUT: Image resized.
	
	I0809 11:23:15.207014    3198 main.go:141] libmachine: STDERR: 
	I0809 11:23:15.207030    3198 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17011-995/.minikube/machines/docker-flags-050000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17011-995/.minikube/machines/docker-flags-050000/disk.qcow2
	I0809 11:23:15.207036    3198 main.go:141] libmachine: Starting QEMU VM...
	I0809 11:23:15.207079    3198 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17011-995/.minikube/machines/docker-flags-050000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/docker-flags-050000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17011-995/.minikube/machines/docker-flags-050000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:16:d8:be:27:df -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/docker-flags-050000/disk.qcow2
	I0809 11:23:15.208517    3198 main.go:141] libmachine: STDOUT: 
	I0809 11:23:15.208534    3198 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0809 11:23:15.208548    3198 client.go:171] LocalClient.Create took 348.434459ms
	I0809 11:23:17.211049    3198 start.go:128] duration metric: createHost completed in 2.399908084s
	I0809 11:23:17.211143    3198 start.go:83] releasing machines lock for "docker-flags-050000", held for 2.40036875s
	W0809 11:23:17.211541    3198 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-050000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:23:17.221157    3198 out.go:177] 
	W0809 11:23:17.225294    3198 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0809 11:23:17.225358    3198 out.go:239] * 
	W0809 11:23:17.227772    3198 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0809 11:23:17.238271    3198 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-050000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-050000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-050000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 89 (86.179417ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p docker-flags-050000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-050000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 89
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-050000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-050000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-050000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-050000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 89 (45.611417ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p docker-flags-050000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-050000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 89
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-050000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-050000\"\n"
panic.go:522: *** TestDockerFlags FAILED at 2023-08-09 11:23:17.386286 -0700 PDT m=+866.527911168
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-050000 -n docker-flags-050000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-050000 -n docker-flags-050000: exit status 7 (28.714792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-050000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-050000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-050000
--- FAIL: TestDockerFlags (10.08s)

TestForceSystemdFlag (11.29s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-037000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
E0809 11:23:01.824248    1410 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/ingress-addon-legacy-050000/client.crt: no such file or directory
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-037000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (11.083960375s)

-- stdout --
	* [force-systemd-flag-037000] minikube v1.31.1 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17011
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node force-systemd-flag-037000 in cluster force-systemd-flag-037000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-037000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0809 11:23:01.137580    3173 out.go:296] Setting OutFile to fd 1 ...
	I0809 11:23:01.137709    3173 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:23:01.137712    3173 out.go:309] Setting ErrFile to fd 2...
	I0809 11:23:01.137715    3173 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:23:01.137819    3173 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17011-995/.minikube/bin
	I0809 11:23:01.138807    3173 out.go:303] Setting JSON to false
	I0809 11:23:01.154047    3173 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1355,"bootTime":1691604026,"procs":402,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0809 11:23:01.154101    3173 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0809 11:23:01.158839    3173 out.go:177] * [force-systemd-flag-037000] minikube v1.31.1 on Darwin 13.5 (arm64)
	I0809 11:23:01.164783    3173 notify.go:220] Checking for updates...
	I0809 11:23:01.167736    3173 out.go:177]   - MINIKUBE_LOCATION=17011
	I0809 11:23:01.171723    3173 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	I0809 11:23:01.175746    3173 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0809 11:23:01.178739    3173 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0809 11:23:01.181745    3173 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	I0809 11:23:01.184743    3173 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0809 11:23:01.187902    3173 config.go:182] Loaded profile config "force-systemd-env-993000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0809 11:23:01.187969    3173 config.go:182] Loaded profile config "multinode-305000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0809 11:23:01.188015    3173 driver.go:373] Setting default libvirt URI to qemu:///system
	I0809 11:23:01.191719    3173 out.go:177] * Using the qemu2 driver based on user configuration
	I0809 11:23:01.197697    3173 start.go:298] selected driver: qemu2
	I0809 11:23:01.197702    3173 start.go:901] validating driver "qemu2" against <nil>
	I0809 11:23:01.197707    3173 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0809 11:23:01.199592    3173 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0809 11:23:01.202755    3173 out.go:177] * Automatically selected the socket_vmnet network
	I0809 11:23:01.205810    3173 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0809 11:23:01.205826    3173 cni.go:84] Creating CNI manager for ""
	I0809 11:23:01.205833    3173 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0809 11:23:01.205837    3173 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0809 11:23:01.205845    3173 start_flags.go:319] config:
	{Name:force-systemd-flag-037000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:force-systemd-flag-037000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0809 11:23:01.209927    3173 iso.go:125] acquiring lock: {Name:mkcf9da2bbc06f7ffafe691590a499fa3fc28d1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 11:23:01.212736    3173 out.go:177] * Starting control plane node force-systemd-flag-037000 in cluster force-systemd-flag-037000
	I0809 11:23:01.220812    3173 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0809 11:23:01.220830    3173 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4
	I0809 11:23:01.220839    3173 cache.go:57] Caching tarball of preloaded images
	I0809 11:23:01.220918    3173 preload.go:174] Found /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0809 11:23:01.220924    3173 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on docker
	I0809 11:23:01.220999    3173 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/force-systemd-flag-037000/config.json ...
	I0809 11:23:01.221011    3173 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/force-systemd-flag-037000/config.json: {Name:mk87a3ad4a182798a1906cddfcf4e88a0df79913 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 11:23:01.221222    3173 start.go:365] acquiring machines lock for force-systemd-flag-037000: {Name:mk8e7245a2916cb0ab3d957fdcdeb632388214d9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0809 11:23:01.221260    3173 start.go:369] acquired machines lock for "force-systemd-flag-037000" in 25.834µs
	I0809 11:23:01.221271    3173 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-037000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:force-systemd-flag-037000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mount
UID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0809 11:23:01.221304    3173 start.go:125] createHost starting for "" (driver="qemu2")
	I0809 11:23:01.229756    3173 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0809 11:23:01.245455    3173 start.go:159] libmachine.API.Create for "force-systemd-flag-037000" (driver="qemu2")
	I0809 11:23:01.245488    3173 client.go:168] LocalClient.Create starting
	I0809 11:23:01.245544    3173 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem
	I0809 11:23:01.245572    3173 main.go:141] libmachine: Decoding PEM data...
	I0809 11:23:01.245581    3173 main.go:141] libmachine: Parsing certificate...
	I0809 11:23:01.245623    3173 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/cert.pem
	I0809 11:23:01.245643    3173 main.go:141] libmachine: Decoding PEM data...
	I0809 11:23:01.245653    3173 main.go:141] libmachine: Parsing certificate...
	I0809 11:23:01.245982    3173 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17011-995/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17011-995/.minikube/cache/iso/arm64/minikube-v1.31.0-1690838458-16971-arm64.iso...
	I0809 11:23:01.364459    3173 main.go:141] libmachine: Creating SSH key...
	I0809 11:23:01.426135    3173 main.go:141] libmachine: Creating Disk image...
	I0809 11:23:01.426141    3173 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0809 11:23:01.426288    3173 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17011-995/.minikube/machines/force-systemd-flag-037000/disk.qcow2.raw /Users/jenkins/minikube-integration/17011-995/.minikube/machines/force-systemd-flag-037000/disk.qcow2
	I0809 11:23:01.434904    3173 main.go:141] libmachine: STDOUT: 
	I0809 11:23:01.434920    3173 main.go:141] libmachine: STDERR: 
	I0809 11:23:01.434997    3173 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/force-systemd-flag-037000/disk.qcow2 +20000M
	I0809 11:23:01.442164    3173 main.go:141] libmachine: STDOUT: Image resized.
	
	I0809 11:23:01.442177    3173 main.go:141] libmachine: STDERR: 
	I0809 11:23:01.442193    3173 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17011-995/.minikube/machines/force-systemd-flag-037000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17011-995/.minikube/machines/force-systemd-flag-037000/disk.qcow2
	I0809 11:23:01.442199    3173 main.go:141] libmachine: Starting QEMU VM...
	I0809 11:23:01.442231    3173 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17011-995/.minikube/machines/force-systemd-flag-037000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/force-systemd-flag-037000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17011-995/.minikube/machines/force-systemd-flag-037000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:03:ff:63:a3:4e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/force-systemd-flag-037000/disk.qcow2
	I0809 11:23:01.443710    3173 main.go:141] libmachine: STDOUT: 
	I0809 11:23:01.443725    3173 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0809 11:23:01.443750    3173 client.go:171] LocalClient.Create took 198.164333ms
	I0809 11:23:03.446814    3173 start.go:128] duration metric: createHost completed in 2.224549583s
	I0809 11:23:03.446893    3173 start.go:83] releasing machines lock for "force-systemd-flag-037000", held for 2.224736375s
	W0809 11:23:03.446946    3173 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:23:03.458345    3173 out.go:177] * Deleting "force-systemd-flag-037000" in qemu2 ...
	W0809 11:23:03.479094    3173 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:23:03.479120    3173 start.go:687] Will try again in 5 seconds ...
	I0809 11:23:08.482969    3173 start.go:365] acquiring machines lock for force-systemd-flag-037000: {Name:mk8e7245a2916cb0ab3d957fdcdeb632388214d9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0809 11:23:09.774489    3173 start.go:369] acquired machines lock for "force-systemd-flag-037000" in 1.291089542s
	I0809 11:23:09.774579    3173 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-037000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:force-systemd-flag-037000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mount
UID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0809 11:23:09.774858    3173 start.go:125] createHost starting for "" (driver="qemu2")
	I0809 11:23:09.780549    3173 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0809 11:23:09.826973    3173 start.go:159] libmachine.API.Create for "force-systemd-flag-037000" (driver="qemu2")
	I0809 11:23:09.827027    3173 client.go:168] LocalClient.Create starting
	I0809 11:23:09.827169    3173 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem
	I0809 11:23:09.827242    3173 main.go:141] libmachine: Decoding PEM data...
	I0809 11:23:09.827262    3173 main.go:141] libmachine: Parsing certificate...
	I0809 11:23:09.827334    3173 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/cert.pem
	I0809 11:23:09.827374    3173 main.go:141] libmachine: Decoding PEM data...
	I0809 11:23:09.827392    3173 main.go:141] libmachine: Parsing certificate...
	I0809 11:23:09.828001    3173 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17011-995/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17011-995/.minikube/cache/iso/arm64/minikube-v1.31.0-1690838458-16971-arm64.iso...
	I0809 11:23:09.957051    3173 main.go:141] libmachine: Creating SSH key...
	I0809 11:23:10.136079    3173 main.go:141] libmachine: Creating Disk image...
	I0809 11:23:10.136086    3173 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0809 11:23:10.136255    3173 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17011-995/.minikube/machines/force-systemd-flag-037000/disk.qcow2.raw /Users/jenkins/minikube-integration/17011-995/.minikube/machines/force-systemd-flag-037000/disk.qcow2
	I0809 11:23:10.145352    3173 main.go:141] libmachine: STDOUT: 
	I0809 11:23:10.145368    3173 main.go:141] libmachine: STDERR: 
	I0809 11:23:10.145435    3173 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/force-systemd-flag-037000/disk.qcow2 +20000M
	I0809 11:23:10.152714    3173 main.go:141] libmachine: STDOUT: Image resized.
	
	I0809 11:23:10.152749    3173 main.go:141] libmachine: STDERR: 
	I0809 11:23:10.152767    3173 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17011-995/.minikube/machines/force-systemd-flag-037000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17011-995/.minikube/machines/force-systemd-flag-037000/disk.qcow2
	I0809 11:23:10.152772    3173 main.go:141] libmachine: Starting QEMU VM...
	I0809 11:23:10.152802    3173 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17011-995/.minikube/machines/force-systemd-flag-037000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/force-systemd-flag-037000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17011-995/.minikube/machines/force-systemd-flag-037000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:38:da:06:52:1d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/force-systemd-flag-037000/disk.qcow2
	I0809 11:23:10.154318    3173 main.go:141] libmachine: STDOUT: 
	I0809 11:23:10.154330    3173 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0809 11:23:10.154339    3173 client.go:171] LocalClient.Create took 327.230667ms
	I0809 11:23:12.156961    3173 start.go:128] duration metric: createHost completed in 2.381526334s
	I0809 11:23:12.157017    3173 start.go:83] releasing machines lock for "force-systemd-flag-037000", held for 2.38197575s
	W0809 11:23:12.157392    3173 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-037000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-037000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:23:12.169134    3173 out.go:177] 
	W0809 11:23:12.174092    3173 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0809 11:23:12.174133    3173 out.go:239] * 
	* 
	W0809 11:23:12.176751    3173 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0809 11:23:12.185987    3173 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-037000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-037000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-037000 ssh "docker info --format {{.CgroupDriver}}": exit status 89 (77.4095ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p force-systemd-flag-037000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-037000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 89
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2023-08-09 11:23:12.279069 -0700 PDT m=+861.421533668
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-037000 -n force-systemd-flag-037000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-037000 -n force-systemd-flag-037000: exit status 7 (32.557209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-037000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-037000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-037000
--- FAIL: TestForceSystemdFlag (11.29s)
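Both create attempts above fail at the same point: QEMU is launched through `socket_vmnet_client`, which cannot reach the socket_vmnet daemon's socket at `/var/run/socket_vmnet` ("Connection refused"). A minimal pre-flight sketch of the check this failure implies is below; the socket path is taken from the `SocketVMnetPath` value in the config dump above, and everything else is an illustrative assumption, not part of the test harness:

```shell
# Minimal sketch: confirm the socket_vmnet daemon's unix socket exists
# before re-running the failing tests. Prints "ok" or "missing".
check_socket() {
  # -S is true only if the path exists and is a unix domain socket
  if [ -S "$1" ]; then echo ok; else echo missing; fi
}

check_socket /var/run/socket_vmnet
```

If this prints `missing`, the daemon is not running on the build agent; on a Homebrew-based install it is typically started with `sudo brew services start socket_vmnet` (an assumption based on the `/opt/socket_vmnet` and `/opt/homebrew` paths in the log, not something the log itself confirms).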

                                                
                                    
TestForceSystemdEnv (10.13s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-993000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-993000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.922345334s)

-- stdout --
	* [force-systemd-env-993000] minikube v1.31.1 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17011
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node force-systemd-env-993000 in cluster force-systemd-env-993000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-993000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0809 11:22:57.317708    3152 out.go:296] Setting OutFile to fd 1 ...
	I0809 11:22:57.317814    3152 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:22:57.317819    3152 out.go:309] Setting ErrFile to fd 2...
	I0809 11:22:57.317821    3152 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:22:57.317932    3152 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17011-995/.minikube/bin
	I0809 11:22:57.318938    3152 out.go:303] Setting JSON to false
	I0809 11:22:57.334543    3152 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1351,"bootTime":1691604026,"procs":401,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0809 11:22:57.334627    3152 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0809 11:22:57.340297    3152 out.go:177] * [force-systemd-env-993000] minikube v1.31.1 on Darwin 13.5 (arm64)
	I0809 11:22:57.348321    3152 out.go:177]   - MINIKUBE_LOCATION=17011
	I0809 11:22:57.351360    3152 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	I0809 11:22:57.348406    3152 notify.go:220] Checking for updates...
	I0809 11:22:57.357305    3152 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0809 11:22:57.360375    3152 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0809 11:22:57.363366    3152 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	I0809 11:22:57.366326    3152 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0809 11:22:57.369969    3152 config.go:182] Loaded profile config "multinode-305000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0809 11:22:57.370017    3152 driver.go:373] Setting default libvirt URI to qemu:///system
	I0809 11:22:57.374316    3152 out.go:177] * Using the qemu2 driver based on user configuration
	I0809 11:22:57.381320    3152 start.go:298] selected driver: qemu2
	I0809 11:22:57.381327    3152 start.go:901] validating driver "qemu2" against <nil>
	I0809 11:22:57.381334    3152 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0809 11:22:57.383218    3152 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0809 11:22:57.386321    3152 out.go:177] * Automatically selected the socket_vmnet network
	I0809 11:22:57.389430    3152 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0809 11:22:57.389443    3152 cni.go:84] Creating CNI manager for ""
	I0809 11:22:57.389451    3152 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0809 11:22:57.389454    3152 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0809 11:22:57.389459    3152 start_flags.go:319] config:
	{Name:force-systemd-env-993000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:force-systemd-env-993000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0809 11:22:57.393307    3152 iso.go:125] acquiring lock: {Name:mkcf9da2bbc06f7ffafe691590a499fa3fc28d1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 11:22:57.400330    3152 out.go:177] * Starting control plane node force-systemd-env-993000 in cluster force-systemd-env-993000
	I0809 11:22:57.404336    3152 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0809 11:22:57.404351    3152 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4
	I0809 11:22:57.404357    3152 cache.go:57] Caching tarball of preloaded images
	I0809 11:22:57.404419    3152 preload.go:174] Found /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0809 11:22:57.404427    3152 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on docker
	I0809 11:22:57.404476    3152 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/force-systemd-env-993000/config.json ...
	I0809 11:22:57.404487    3152 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/force-systemd-env-993000/config.json: {Name:mk352380357f05709bee77ca135afbfda1b12a40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 11:22:57.404655    3152 start.go:365] acquiring machines lock for force-systemd-env-993000: {Name:mk8e7245a2916cb0ab3d957fdcdeb632388214d9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0809 11:22:57.404683    3152 start.go:369] acquired machines lock for "force-systemd-env-993000" in 20.792µs
	I0809 11:22:57.404691    3152 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-993000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kube
rnetesConfig:{KubernetesVersion:v1.27.4 ClusterName:force-systemd-env-993000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUI
D:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0809 11:22:57.404725    3152 start.go:125] createHost starting for "" (driver="qemu2")
	I0809 11:22:57.413346    3152 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0809 11:22:57.427130    3152 start.go:159] libmachine.API.Create for "force-systemd-env-993000" (driver="qemu2")
	I0809 11:22:57.427151    3152 client.go:168] LocalClient.Create starting
	I0809 11:22:57.427209    3152 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem
	I0809 11:22:57.427235    3152 main.go:141] libmachine: Decoding PEM data...
	I0809 11:22:57.427249    3152 main.go:141] libmachine: Parsing certificate...
	I0809 11:22:57.427290    3152 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/cert.pem
	I0809 11:22:57.427309    3152 main.go:141] libmachine: Decoding PEM data...
	I0809 11:22:57.427315    3152 main.go:141] libmachine: Parsing certificate...
	I0809 11:22:57.427624    3152 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17011-995/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17011-995/.minikube/cache/iso/arm64/minikube-v1.31.0-1690838458-16971-arm64.iso...
	I0809 11:22:57.545687    3152 main.go:141] libmachine: Creating SSH key...
	I0809 11:22:57.778311    3152 main.go:141] libmachine: Creating Disk image...
	I0809 11:22:57.778323    3152 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0809 11:22:57.778504    3152 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17011-995/.minikube/machines/force-systemd-env-993000/disk.qcow2.raw /Users/jenkins/minikube-integration/17011-995/.minikube/machines/force-systemd-env-993000/disk.qcow2
	I0809 11:22:57.787923    3152 main.go:141] libmachine: STDOUT: 
	I0809 11:22:57.787945    3152 main.go:141] libmachine: STDERR: 
	I0809 11:22:57.788043    3152 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/force-systemd-env-993000/disk.qcow2 +20000M
	I0809 11:22:57.796498    3152 main.go:141] libmachine: STDOUT: Image resized.
	
	I0809 11:22:57.796520    3152 main.go:141] libmachine: STDERR: 
	I0809 11:22:57.796544    3152 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17011-995/.minikube/machines/force-systemd-env-993000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17011-995/.minikube/machines/force-systemd-env-993000/disk.qcow2
	I0809 11:22:57.796552    3152 main.go:141] libmachine: Starting QEMU VM...
	I0809 11:22:57.796602    3152 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17011-995/.minikube/machines/force-systemd-env-993000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/force-systemd-env-993000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17011-995/.minikube/machines/force-systemd-env-993000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:88:01:57:87:4b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/force-systemd-env-993000/disk.qcow2
	I0809 11:22:57.798529    3152 main.go:141] libmachine: STDOUT: 
	I0809 11:22:57.798543    3152 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0809 11:22:57.798572    3152 client.go:171] LocalClient.Create took 371.217167ms
	I0809 11:22:59.801780    3152 start.go:128] duration metric: createHost completed in 2.395801125s
	I0809 11:22:59.801895    3152 start.go:83] releasing machines lock for "force-systemd-env-993000", held for 2.395977417s
	W0809 11:22:59.802016    3152 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:22:59.808454    3152 out.go:177] * Deleting "force-systemd-env-993000" in qemu2 ...
	W0809 11:22:59.835152    3152 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:22:59.835188    3152 start.go:687] Will try again in 5 seconds ...
	I0809 11:23:04.839442    3152 start.go:365] acquiring machines lock for force-systemd-env-993000: {Name:mk8e7245a2916cb0ab3d957fdcdeb632388214d9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0809 11:23:04.839844    3152 start.go:369] acquired machines lock for "force-systemd-env-993000" in 300.416µs
	I0809 11:23:04.839984    3152 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-993000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kube
rnetesConfig:{KubernetesVersion:v1.27.4 ClusterName:force-systemd-env-993000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUI
D:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0809 11:23:04.840232    3152 start.go:125] createHost starting for "" (driver="qemu2")
	I0809 11:23:04.849711    3152 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0809 11:23:04.895933    3152 start.go:159] libmachine.API.Create for "force-systemd-env-993000" (driver="qemu2")
	I0809 11:23:04.895972    3152 client.go:168] LocalClient.Create starting
	I0809 11:23:04.896075    3152 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem
	I0809 11:23:04.896139    3152 main.go:141] libmachine: Decoding PEM data...
	I0809 11:23:04.896159    3152 main.go:141] libmachine: Parsing certificate...
	I0809 11:23:04.896243    3152 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/cert.pem
	I0809 11:23:04.896292    3152 main.go:141] libmachine: Decoding PEM data...
	I0809 11:23:04.896305    3152 main.go:141] libmachine: Parsing certificate...
	I0809 11:23:04.897202    3152 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17011-995/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17011-995/.minikube/cache/iso/arm64/minikube-v1.31.0-1690838458-16971-arm64.iso...
	I0809 11:23:05.029733    3152 main.go:141] libmachine: Creating SSH key...
	I0809 11:23:05.157002    3152 main.go:141] libmachine: Creating Disk image...
	I0809 11:23:05.157010    3152 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0809 11:23:05.157165    3152 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17011-995/.minikube/machines/force-systemd-env-993000/disk.qcow2.raw /Users/jenkins/minikube-integration/17011-995/.minikube/machines/force-systemd-env-993000/disk.qcow2
	I0809 11:23:05.165803    3152 main.go:141] libmachine: STDOUT: 
	I0809 11:23:05.165824    3152 main.go:141] libmachine: STDERR: 
	I0809 11:23:05.165887    3152 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/force-systemd-env-993000/disk.qcow2 +20000M
	I0809 11:23:05.173049    3152 main.go:141] libmachine: STDOUT: Image resized.
	
	I0809 11:23:05.173063    3152 main.go:141] libmachine: STDERR: 
	I0809 11:23:05.173088    3152 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17011-995/.minikube/machines/force-systemd-env-993000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17011-995/.minikube/machines/force-systemd-env-993000/disk.qcow2
	I0809 11:23:05.173094    3152 main.go:141] libmachine: Starting QEMU VM...
	I0809 11:23:05.173153    3152 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17011-995/.minikube/machines/force-systemd-env-993000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/force-systemd-env-993000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17011-995/.minikube/machines/force-systemd-env-993000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:0c:87:ed:1b:ce -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/force-systemd-env-993000/disk.qcow2
	I0809 11:23:05.174642    3152 main.go:141] libmachine: STDOUT: 
	I0809 11:23:05.174655    3152 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0809 11:23:05.174669    3152 client.go:171] LocalClient.Create took 278.600167ms
	I0809 11:23:07.177462    3152 start.go:128] duration metric: createHost completed in 2.336472125s
	I0809 11:23:07.177557    3152 start.go:83] releasing machines lock for "force-systemd-env-993000", held for 2.336966458s
	W0809 11:23:07.178019    3152 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-993000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-993000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:23:07.186766    3152 out.go:177] 
	W0809 11:23:07.191741    3152 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0809 11:23:07.191766    3152 out.go:239] * 
	* 
	W0809 11:23:07.194386    3152 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0809 11:23:07.202786    3152 out.go:177] 
** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-993000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-993000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-993000 ssh "docker info --format {{.CgroupDriver}}": exit status 89 (76.788042ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p force-systemd-env-993000"
-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-993000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 89
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2023-08-09 11:23:07.295369 -0700 PDT m=+856.439015835
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-993000 -n force-systemd-env-993000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-993000 -n force-systemd-env-993000: exit status 7 (34.133666ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-993000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-993000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-993000
--- FAIL: TestForceSystemdEnv (10.13s)
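Every qemu2 VM creation in this run dies at the same step: `socket_vmnet_client` cannot reach the UNIX socket at `/var/run/socket_vmnet` ("Connection refused"), so the failure is in the host networking helper, not in minikube or the test itself. A minimal triage sketch, assuming socket_vmnet was installed via Homebrew at the paths logged above:

```shell
# Triage sketch (assumes a Homebrew-managed socket_vmnet; paths match the log above).
# 1. The minikube qemu2 driver expects this UNIX socket to exist:
ls -l /var/run/socket_vmnet

# 2. If it is missing, the launchd service is probably not running; restart it
#    (socket_vmnet needs root to use the macOS vmnet framework):
sudo brew services restart socket_vmnet

# 3. Smoke-test the connection the same way the driver does -- wrap a trivial
#    command instead of qemu-system-aarch64:
/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
```

If step 3 still prints "Connection refused", every qemu2-based test in this report will keep failing with exit status 80 until the service is healthy.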

TestFunctional/parallel/ServiceCmdConnect (34.6s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-901000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-901000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58d66798bb-2gr4w" [c4c18db8-f524-4c1b-8037-0b0e7861d4e7] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-58d66798bb-2gr4w" [c4c18db8-f524-4c1b-8037-0b0e7861d4e7] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.012957417s
functional_test.go:1648: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.105.4:30162
functional_test.go:1660: error fetching http://192.168.105.4:30162: Get "http://192.168.105.4:30162": dial tcp 192.168.105.4:30162: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:30162: Get "http://192.168.105.4:30162": dial tcp 192.168.105.4:30162: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:30162: Get "http://192.168.105.4:30162": dial tcp 192.168.105.4:30162: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:30162: Get "http://192.168.105.4:30162": dial tcp 192.168.105.4:30162: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:30162: Get "http://192.168.105.4:30162": dial tcp 192.168.105.4:30162: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:30162: Get "http://192.168.105.4:30162": dial tcp 192.168.105.4:30162: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:30162: Get "http://192.168.105.4:30162": dial tcp 192.168.105.4:30162: connect: connection refused
functional_test.go:1680: failed to fetch http://192.168.105.4:30162: Get "http://192.168.105.4:30162": dial tcp 192.168.105.4:30162: connect: connection refused
functional_test.go:1597: service test failed - dumping debug information
functional_test.go:1598: -----------------------service failure post-mortem--------------------------------
functional_test.go:1601: (dbg) Run:  kubectl --context functional-901000 describe po hello-node-connect
functional_test.go:1605: hello-node pod describe:
Name:             hello-node-connect-58d66798bb-2gr4w
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-901000/192.168.105.4
Start Time:       Wed, 09 Aug 2023 11:13:53 -0700
Labels:           app=hello-node-connect
pod-template-hash=58d66798bb
Annotations:      <none>
Status:           Running
IP:               10.244.0.9
IPs:
IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-connect-58d66798bb
Containers:
echoserver-arm:
Container ID:   docker://e30b2ac8805d801720118d9e81cdaea0c958b53972778292b27c4cee29efd449
Image:          registry.k8s.io/echoserver-arm:1.8
Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       CrashLoopBackOff
Last State:     Terminated
Reason:       Error
Exit Code:    1
Started:      Wed, 09 Aug 2023 11:14:12 -0700
Finished:     Wed, 09 Aug 2023 11:14:12 -0700
Ready:          False
Restart Count:  2
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6fz5s (ro)
Conditions:
Type              Status
Initialized       True 
Ready             False 
ContainersReady   False 
PodScheduled      True 
Volumes:
kube-api-access-6fz5s:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                From               Message
----     ------     ----               ----               -------
Normal   Scheduled  33s                default-scheduler  Successfully assigned default/hello-node-connect-58d66798bb-2gr4w to functional-901000
Normal   Pulling    33s                kubelet            Pulling image "registry.k8s.io/echoserver-arm:1.8"
Normal   Pulled     27s                kubelet            Successfully pulled image "registry.k8s.io/echoserver-arm:1.8" in 5.403270621s (5.403274621s including waiting)
Normal   Created    14s (x3 over 27s)  kubelet            Created container echoserver-arm
Normal   Started    14s (x3 over 27s)  kubelet            Started container echoserver-arm
Normal   Pulled     14s (x2 over 27s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
Warning  BackOff    1s (x4 over 26s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-58d66798bb-2gr4w_default(c4c18db8-f524-4c1b-8037-0b0e7861d4e7)
functional_test.go:1607: (dbg) Run:  kubectl --context functional-901000 logs -l app=hello-node-connect
functional_test.go:1611: hello-node logs:
exec /usr/sbin/nginx: exec format error
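`exec format error` on the container's entrypoint is the classic symptom of a binary built for a different CPU architecture: the pod starts, crashes instantly, and never serves, which would explain both the CrashLoopBackOff events above and the connection-refused fetches. A small sketch for comparing the host architecture against the one an image reports; the helper function names here are ours, not part of the test suite:

```shell
# Sketch: normalize `uname -m` output to Go-style arch names and compare
# it with the architecture string an image reports (e.g. from
# `docker image inspect --format '{{.Architecture}}' IMAGE`).
normalize_arch() {
  case "$1" in
    aarch64|arm64) echo arm64 ;;
    x86_64|amd64)  echo amd64 ;;
    *)             echo "$1" ;;   # pass anything else through unchanged
  esac
}

check_arch_match() {
  # $1: architecture string reported by the image
  if [ "$(normalize_arch "$(uname -m)")" = "$(normalize_arch "$1")" ]; then
    echo match
  else
    # a foreign-arch image crashes at exec time with "exec format error"
    echo mismatch
  fi
}
```

For example, feeding it the output of `docker image inspect --format '{{.Architecture}}' registry.k8s.io/echoserver-arm:1.8` on the node would confirm whether the image actually carries arm64 code.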
functional_test.go:1613: (dbg) Run:  kubectl --context functional-901000 describe svc hello-node-connect
functional_test.go:1617: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.103.176.185
IPs:                      10.103.176.185
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30162/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
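The empty `Endpoints:` field above ties the Service symptom back to the pod crash: a Service only lists endpoints for pods that pass readiness, so with the sole backend in CrashLoopBackOff, NodePort 30162 has nothing behind it and connections are refused at the node. A sketch for confirming this against the same cluster (the commands assume the functional-901000 kubectl context used by this run):

```shell
# Sketch: confirm the Service has no ready backends (assumes the
# functional-901000 context from this test run is still available).
# An empty ENDPOINTS column here matches the blank "Endpoints:" line
# in the describe output above.
kubectl --context functional-901000 get endpoints hello-node-connect

# List the pods the Service selects, with their ready state and restarts:
kubectl --context functional-901000 get pods -l app=hello-node-connect -o wide
```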
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-901000 -n functional-901000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                                         Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh     | functional-901000 ssh echo                                                                                           | functional-901000 | jenkins | v1.31.1 | 09 Aug 23 11:13 PDT | 09 Aug 23 11:13 PDT |
	|         | hello                                                                                                                |                   |         |         |                     |                     |
	| ssh     | functional-901000 ssh -n                                                                                             | functional-901000 | jenkins | v1.31.1 | 09 Aug 23 11:13 PDT | 09 Aug 23 11:13 PDT |
	|         | functional-901000 sudo cat                                                                                           |                   |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                             |                   |         |         |                     |                     |
	| ssh     | functional-901000 ssh cat                                                                                            | functional-901000 | jenkins | v1.31.1 | 09 Aug 23 11:13 PDT | 09 Aug 23 11:13 PDT |
	|         | /etc/hostname                                                                                                        |                   |         |         |                     |                     |
	| tunnel  | functional-901000 tunnel                                                                                             | functional-901000 | jenkins | v1.31.1 | 09 Aug 23 11:13 PDT |                     |
	|         | --alsologtostderr                                                                                                    |                   |         |         |                     |                     |
	| tunnel  | functional-901000 tunnel                                                                                             | functional-901000 | jenkins | v1.31.1 | 09 Aug 23 11:13 PDT |                     |
	|         | --alsologtostderr                                                                                                    |                   |         |         |                     |                     |
	| tunnel  | functional-901000 tunnel                                                                                             | functional-901000 | jenkins | v1.31.1 | 09 Aug 23 11:13 PDT |                     |
	|         | --alsologtostderr                                                                                                    |                   |         |         |                     |                     |
	| addons  | functional-901000 addons list                                                                                        | functional-901000 | jenkins | v1.31.1 | 09 Aug 23 11:13 PDT | 09 Aug 23 11:13 PDT |
	| addons  | functional-901000 addons list                                                                                        | functional-901000 | jenkins | v1.31.1 | 09 Aug 23 11:13 PDT | 09 Aug 23 11:13 PDT |
	|         | -o json                                                                                                              |                   |         |         |                     |                     |
	| service | functional-901000 service                                                                                            | functional-901000 | jenkins | v1.31.1 | 09 Aug 23 11:14 PDT | 09 Aug 23 11:14 PDT |
	|         | hello-node-connect --url                                                                                             |                   |         |         |                     |                     |
	| service | functional-901000 service list                                                                                       | functional-901000 | jenkins | v1.31.1 | 09 Aug 23 11:14 PDT | 09 Aug 23 11:14 PDT |
	| service | functional-901000 service list                                                                                       | functional-901000 | jenkins | v1.31.1 | 09 Aug 23 11:14 PDT | 09 Aug 23 11:14 PDT |
	|         | -o json                                                                                                              |                   |         |         |                     |                     |
	| service | functional-901000 service                                                                                            | functional-901000 | jenkins | v1.31.1 | 09 Aug 23 11:14 PDT | 09 Aug 23 11:14 PDT |
	|         | --namespace=default --https                                                                                          |                   |         |         |                     |                     |
	|         | --url hello-node                                                                                                     |                   |         |         |                     |                     |
	| service | functional-901000                                                                                                    | functional-901000 | jenkins | v1.31.1 | 09 Aug 23 11:14 PDT | 09 Aug 23 11:14 PDT |
	|         | service hello-node --url                                                                                             |                   |         |         |                     |                     |
	|         | --format={{.IP}}                                                                                                     |                   |         |         |                     |                     |
	| service | functional-901000 service                                                                                            | functional-901000 | jenkins | v1.31.1 | 09 Aug 23 11:14 PDT | 09 Aug 23 11:14 PDT |
	|         | hello-node --url                                                                                                     |                   |         |         |                     |                     |
	| ssh     | functional-901000 ssh findmnt                                                                                        | functional-901000 | jenkins | v1.31.1 | 09 Aug 23 11:14 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| mount   | -p functional-901000                                                                                                 | functional-901000 | jenkins | v1.31.1 | 09 Aug 23 11:14 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3528485627/001:/mount-9p      |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-901000 ssh findmnt                                                                                        | functional-901000 | jenkins | v1.31.1 | 09 Aug 23 11:14 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-901000 ssh findmnt                                                                                        | functional-901000 | jenkins | v1.31.1 | 09 Aug 23 11:14 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-901000 ssh findmnt                                                                                        | functional-901000 | jenkins | v1.31.1 | 09 Aug 23 11:14 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-901000 ssh findmnt                                                                                        | functional-901000 | jenkins | v1.31.1 | 09 Aug 23 11:14 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-901000 ssh findmnt                                                                                        | functional-901000 | jenkins | v1.31.1 | 09 Aug 23 11:14 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-901000 ssh findmnt                                                                                        | functional-901000 | jenkins | v1.31.1 | 09 Aug 23 11:14 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-901000 ssh sudo                                                                                           | functional-901000 | jenkins | v1.31.1 | 09 Aug 23 11:14 PDT |                     |
	|         | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| ssh     | functional-901000 ssh findmnt                                                                                        | functional-901000 | jenkins | v1.31.1 | 09 Aug 23 11:14 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| mount   | -p functional-901000                                                                                                 | functional-901000 | jenkins | v1.31.1 | 09 Aug 23 11:14 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1457834126/001:/mount-9p |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1 --port 46464                                                                                  |                   |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/09 11:12:59
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.6 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0809 11:12:59.410458    1792 out.go:296] Setting OutFile to fd 1 ...
	I0809 11:12:59.410571    1792 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:12:59.410573    1792 out.go:309] Setting ErrFile to fd 2...
	I0809 11:12:59.410574    1792 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:12:59.410676    1792 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17011-995/.minikube/bin
	I0809 11:12:59.411698    1792 out.go:303] Setting JSON to false
	I0809 11:12:59.427008    1792 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":753,"bootTime":1691604026,"procs":400,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0809 11:12:59.427112    1792 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0809 11:12:59.431158    1792 out.go:177] * [functional-901000] minikube v1.31.1 on Darwin 13.5 (arm64)
	I0809 11:12:59.439190    1792 out.go:177]   - MINIKUBE_LOCATION=17011
	I0809 11:12:59.443111    1792 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	I0809 11:12:59.439256    1792 notify.go:220] Checking for updates...
	I0809 11:12:59.449096    1792 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0809 11:12:59.452208    1792 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0809 11:12:59.455204    1792 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	I0809 11:12:59.456531    1792 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0809 11:12:59.459422    1792 config.go:182] Loaded profile config "functional-901000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0809 11:12:59.459463    1792 driver.go:373] Setting default libvirt URI to qemu:///system
	I0809 11:12:59.464123    1792 out.go:177] * Using the qemu2 driver based on existing profile
	I0809 11:12:59.469160    1792 start.go:298] selected driver: qemu2
	I0809 11:12:59.469162    1792 start.go:901] validating driver "qemu2" against &{Name:functional-901000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.27.4 ClusterName:functional-901000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0809 11:12:59.469784    1792 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0809 11:12:59.472598    1792 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0809 11:12:59.472622    1792 cni.go:84] Creating CNI manager for ""
	I0809 11:12:59.472629    1792 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0809 11:12:59.472634    1792 start_flags.go:319] config:
	{Name:functional-901000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:functional-901000 Namespace:default A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0809 11:12:59.476767    1792 iso.go:125] acquiring lock: {Name:mkcf9da2bbc06f7ffafe691590a499fa3fc28d1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 11:12:59.485148    1792 out.go:177] * Starting control plane node functional-901000 in cluster functional-901000
	I0809 11:12:59.489107    1792 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0809 11:12:59.489119    1792 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4
	I0809 11:12:59.489125    1792 cache.go:57] Caching tarball of preloaded images
	I0809 11:12:59.489165    1792 preload.go:174] Found /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0809 11:12:59.489168    1792 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on docker
	I0809 11:12:59.489224    1792 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/functional-901000/config.json ...
	I0809 11:12:59.489502    1792 start.go:365] acquiring machines lock for functional-901000: {Name:mk8e7245a2916cb0ab3d957fdcdeb632388214d9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0809 11:12:59.489528    1792 start.go:369] acquired machines lock for "functional-901000" in 21.459µs
	I0809 11:12:59.489535    1792 start.go:96] Skipping create...Using existing machine configuration
	I0809 11:12:59.489538    1792 fix.go:54] fixHost starting: 
	I0809 11:12:59.490112    1792 fix.go:102] recreateIfNeeded on functional-901000: state=Running err=<nil>
	W0809 11:12:59.490119    1792 fix.go:128] unexpected machine state, will restart: <nil>
	I0809 11:12:59.498137    1792 out.go:177] * Updating the running qemu2 "functional-901000" VM ...
	I0809 11:12:59.502135    1792 machine.go:88] provisioning docker machine ...
	I0809 11:12:59.502143    1792 buildroot.go:166] provisioning hostname "functional-901000"
	I0809 11:12:59.502171    1792 main.go:141] libmachine: Using SSH client type: native
	I0809 11:12:59.502409    1792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104691590] 0x104693ff0 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0809 11:12:59.502413    1792 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-901000 && echo "functional-901000" | sudo tee /etc/hostname
	I0809 11:12:59.554782    1792 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-901000
	
	I0809 11:12:59.554822    1792 main.go:141] libmachine: Using SSH client type: native
	I0809 11:12:59.555069    1792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104691590] 0x104693ff0 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0809 11:12:59.555076    1792 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-901000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-901000/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-901000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0809 11:12:59.606282    1792 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0809 11:12:59.606288    1792 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17011-995/.minikube CaCertPath:/Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17011-995/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17011-995/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17011-995/.minikube}
	I0809 11:12:59.606295    1792 buildroot.go:174] setting up certificates
	I0809 11:12:59.606299    1792 provision.go:83] configureAuth start
	I0809 11:12:59.606301    1792 provision.go:138] copyHostCerts
	I0809 11:12:59.606366    1792 exec_runner.go:144] found /Users/jenkins/minikube-integration/17011-995/.minikube/ca.pem, removing ...
	I0809 11:12:59.606369    1792 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17011-995/.minikube/ca.pem
	I0809 11:12:59.606471    1792 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17011-995/.minikube/ca.pem (1082 bytes)
	I0809 11:12:59.606648    1792 exec_runner.go:144] found /Users/jenkins/minikube-integration/17011-995/.minikube/cert.pem, removing ...
	I0809 11:12:59.606650    1792 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17011-995/.minikube/cert.pem
	I0809 11:12:59.606722    1792 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17011-995/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17011-995/.minikube/cert.pem (1123 bytes)
	I0809 11:12:59.606833    1792 exec_runner.go:144] found /Users/jenkins/minikube-integration/17011-995/.minikube/key.pem, removing ...
	I0809 11:12:59.606835    1792 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17011-995/.minikube/key.pem
	I0809 11:12:59.606890    1792 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17011-995/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17011-995/.minikube/key.pem (1679 bytes)
	I0809 11:12:59.606967    1792 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17011-995/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca-key.pem org=jenkins.functional-901000 san=[192.168.105.4 192.168.105.4 localhost 127.0.0.1 minikube functional-901000]
	I0809 11:12:59.768038    1792 provision.go:172] copyRemoteCerts
	I0809 11:12:59.768076    1792 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0809 11:12:59.768082    1792 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/functional-901000/id_rsa Username:docker}
	I0809 11:12:59.797154    1792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0809 11:12:59.804667    1792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17011-995/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0809 11:12:59.812016    1792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17011-995/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0809 11:12:59.819195    1792 provision.go:86] duration metric: configureAuth took 212.90025ms
	I0809 11:12:59.819200    1792 buildroot.go:189] setting minikube options for container-runtime
	I0809 11:12:59.819314    1792 config.go:182] Loaded profile config "functional-901000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0809 11:12:59.819345    1792 main.go:141] libmachine: Using SSH client type: native
	I0809 11:12:59.819567    1792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104691590] 0x104693ff0 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0809 11:12:59.819570    1792 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0809 11:12:59.870364    1792 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0809 11:12:59.870369    1792 buildroot.go:70] root file system type: tmpfs
	I0809 11:12:59.870426    1792 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0809 11:12:59.870485    1792 main.go:141] libmachine: Using SSH client type: native
	I0809 11:12:59.870724    1792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104691590] 0x104693ff0 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0809 11:12:59.870756    1792 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0809 11:12:59.925903    1792 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0809 11:12:59.925947    1792 main.go:141] libmachine: Using SSH client type: native
	I0809 11:12:59.926179    1792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104691590] 0x104693ff0 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0809 11:12:59.926186    1792 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0809 11:12:59.978764    1792 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0809 11:12:59.978770    1792 machine.go:91] provisioned docker machine in 476.647958ms
	I0809 11:12:59.978775    1792 start.go:300] post-start starting for "functional-901000" (driver="qemu2")
	I0809 11:12:59.978779    1792 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0809 11:12:59.978828    1792 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0809 11:12:59.978834    1792 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/functional-901000/id_rsa Username:docker}
	I0809 11:13:00.005735    1792 ssh_runner.go:195] Run: cat /etc/os-release
	I0809 11:13:00.007114    1792 info.go:137] Remote host: Buildroot 2021.02.12
	I0809 11:13:00.007118    1792 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17011-995/.minikube/addons for local assets ...
	I0809 11:13:00.007178    1792 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17011-995/.minikube/files for local assets ...
	I0809 11:13:00.007275    1792 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17011-995/.minikube/files/etc/ssl/certs/14102.pem -> 14102.pem in /etc/ssl/certs
	I0809 11:13:00.007372    1792 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17011-995/.minikube/files/etc/test/nested/copy/1410/hosts -> hosts in /etc/test/nested/copy/1410
	I0809 11:13:00.007401    1792 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1410
	I0809 11:13:00.010069    1792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17011-995/.minikube/files/etc/ssl/certs/14102.pem --> /etc/ssl/certs/14102.pem (1708 bytes)
	I0809 11:13:00.016933    1792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17011-995/.minikube/files/etc/test/nested/copy/1410/hosts --> /etc/test/nested/copy/1410/hosts (40 bytes)
	I0809 11:13:00.023910    1792 start.go:303] post-start completed in 45.131875ms
	I0809 11:13:00.023914    1792 fix.go:56] fixHost completed within 534.395792ms
	I0809 11:13:00.023955    1792 main.go:141] libmachine: Using SSH client type: native
	I0809 11:13:00.024196    1792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104691590] 0x104693ff0 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0809 11:13:00.024199    1792 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0809 11:13:00.074169    1792 main.go:141] libmachine: SSH cmd err, output: <nil>: 1691604780.262468521
	
	I0809 11:13:00.074178    1792 fix.go:206] guest clock: 1691604780.262468521
	I0809 11:13:00.074181    1792 fix.go:219] Guest: 2023-08-09 11:13:00.262468521 -0700 PDT Remote: 2023-08-09 11:13:00.023915 -0700 PDT m=+0.633011501 (delta=238.553521ms)
	I0809 11:13:00.074193    1792 fix.go:190] guest clock delta is within tolerance: 238.553521ms
	I0809 11:13:00.074195    1792 start.go:83] releasing machines lock for "functional-901000", held for 584.68475ms
	I0809 11:13:00.074507    1792 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0809 11:13:00.074508    1792 ssh_runner.go:195] Run: cat /version.json
	I0809 11:13:00.074513    1792 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/functional-901000/id_rsa Username:docker}
	I0809 11:13:00.074524    1792 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/functional-901000/id_rsa Username:docker}
	I0809 11:13:00.147557    1792 ssh_runner.go:195] Run: systemctl --version
	I0809 11:13:00.149796    1792 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0809 11:13:00.151701    1792 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0809 11:13:00.151726    1792 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0809 11:13:00.154804    1792 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0809 11:13:00.154810    1792 start.go:466] detecting cgroup driver to use...
	I0809 11:13:00.154874    1792 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0809 11:13:00.160331    1792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0809 11:13:00.163605    1792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0809 11:13:00.166581    1792 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0809 11:13:00.166606    1792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0809 11:13:00.169424    1792 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0809 11:13:00.172616    1792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0809 11:13:00.176011    1792 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0809 11:13:00.179395    1792 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0809 11:13:00.182205    1792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0809 11:13:00.185016    1792 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0809 11:13:00.188109    1792 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0809 11:13:00.190842    1792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0809 11:13:00.270472    1792 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0809 11:13:00.280970    1792 start.go:466] detecting cgroup driver to use...
	I0809 11:13:00.281030    1792 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0809 11:13:00.286848    1792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0809 11:13:00.291740    1792 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0809 11:13:00.302226    1792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0809 11:13:00.306734    1792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0809 11:13:00.311156    1792 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0809 11:13:00.316712    1792 ssh_runner.go:195] Run: which cri-dockerd
	I0809 11:13:00.318137    1792 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0809 11:13:00.320757    1792 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0809 11:13:00.326046    1792 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0809 11:13:00.396474    1792 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0809 11:13:00.472112    1792 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0809 11:13:00.472121    1792 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0809 11:13:00.477860    1792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0809 11:13:00.552326    1792 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0809 11:13:11.944165    1792 ssh_runner.go:235] Completed: sudo systemctl restart docker: (11.392215334s)
	I0809 11:13:11.944240    1792 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0809 11:13:12.017957    1792 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0809 11:13:12.103335    1792 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0809 11:13:12.175524    1792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0809 11:13:12.254493    1792 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0809 11:13:12.261319    1792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0809 11:13:12.320902    1792 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0809 11:13:12.347870    1792 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0809 11:13:12.347938    1792 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0809 11:13:12.350071    1792 start.go:534] Will wait 60s for crictl version
	I0809 11:13:12.350095    1792 ssh_runner.go:195] Run: which crictl
	I0809 11:13:12.351448    1792 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0809 11:13:12.362918    1792 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.4
	RuntimeApiVersion:  v1alpha2
	I0809 11:13:12.362996    1792 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0809 11:13:12.370940    1792 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0809 11:13:12.385609    1792 out.go:204] * Preparing Kubernetes v1.27.4 on Docker 24.0.4 ...
	I0809 11:13:12.385742    1792 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0809 11:13:12.391629    1792 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0809 11:13:12.393166    1792 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0809 11:13:12.393225    1792 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0809 11:13:12.399522    1792 docker.go:636] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-901000
	registry.k8s.io/kube-apiserver:v1.27.4
	registry.k8s.io/kube-scheduler:v1.27.4
	registry.k8s.io/kube-controller-manager:v1.27.4
	registry.k8s.io/kube-proxy:v1.27.4
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0809 11:13:12.399530    1792 docker.go:566] Images already preloaded, skipping extraction
	I0809 11:13:12.399573    1792 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0809 11:13:12.409200    1792 docker.go:636] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-901000
	registry.k8s.io/kube-apiserver:v1.27.4
	registry.k8s.io/kube-scheduler:v1.27.4
	registry.k8s.io/kube-controller-manager:v1.27.4
	registry.k8s.io/kube-proxy:v1.27.4
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0809 11:13:12.409205    1792 cache_images.go:84] Images are preloaded, skipping loading
	I0809 11:13:12.409249    1792 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0809 11:13:12.416536    1792 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0809 11:13:12.416550    1792 cni.go:84] Creating CNI manager for ""
	I0809 11:13:12.416554    1792 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0809 11:13:12.416565    1792 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0809 11:13:12.416572    1792 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.4 APIServerPort:8441 KubernetesVersion:v1.27.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-901000 NodeName:functional-901000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0809 11:13:12.416624    1792 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.4
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-901000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0809 11:13:12.416650    1792 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=functional-901000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.4 ClusterName:functional-901000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:}
	I0809 11:13:12.416956    1792 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.4
	I0809 11:13:12.420688    1792 binaries.go:44] Found k8s binaries, skipping transfer
	I0809 11:13:12.420741    1792 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0809 11:13:12.423970    1792 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0809 11:13:12.429214    1792 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0809 11:13:12.434177    1792 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1953 bytes)
	I0809 11:13:12.439173    1792 ssh_runner.go:195] Run: grep 192.168.105.4	control-plane.minikube.internal$ /etc/hosts
	I0809 11:13:12.440620    1792 certs.go:56] Setting up /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/functional-901000 for IP: 192.168.105.4
	I0809 11:13:12.440626    1792 certs.go:190] acquiring lock for shared ca certs: {Name:mkc408918270161d0a558be6b69aedd9ebd20eac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 11:13:12.440745    1792 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17011-995/.minikube/ca.key
	I0809 11:13:12.440782    1792 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17011-995/.minikube/proxy-client-ca.key
	I0809 11:13:12.440833    1792 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/functional-901000/client.key
	I0809 11:13:12.440870    1792 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/functional-901000/apiserver.key.942c473b
	I0809 11:13:12.440904    1792 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/functional-901000/proxy-client.key
	I0809 11:13:12.441052    1792 certs.go:437] found cert: /Users/jenkins/minikube-integration/17011-995/.minikube/certs/Users/jenkins/minikube-integration/17011-995/.minikube/certs/1410.pem (1338 bytes)
	W0809 11:13:12.441075    1792 certs.go:433] ignoring /Users/jenkins/minikube-integration/17011-995/.minikube/certs/Users/jenkins/minikube-integration/17011-995/.minikube/certs/1410_empty.pem, impossibly tiny 0 bytes
	I0809 11:13:12.441081    1792 certs.go:437] found cert: /Users/jenkins/minikube-integration/17011-995/.minikube/certs/Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca-key.pem (1679 bytes)
	I0809 11:13:12.441099    1792 certs.go:437] found cert: /Users/jenkins/minikube-integration/17011-995/.minikube/certs/Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem (1082 bytes)
	I0809 11:13:12.441116    1792 certs.go:437] found cert: /Users/jenkins/minikube-integration/17011-995/.minikube/certs/Users/jenkins/minikube-integration/17011-995/.minikube/certs/cert.pem (1123 bytes)
	I0809 11:13:12.441136    1792 certs.go:437] found cert: /Users/jenkins/minikube-integration/17011-995/.minikube/certs/Users/jenkins/minikube-integration/17011-995/.minikube/certs/key.pem (1679 bytes)
	I0809 11:13:12.441176    1792 certs.go:437] found cert: /Users/jenkins/minikube-integration/17011-995/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17011-995/.minikube/files/etc/ssl/certs/14102.pem (1708 bytes)
	I0809 11:13:12.441458    1792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/functional-901000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0809 11:13:12.448445    1792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/functional-901000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0809 11:13:12.455797    1792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/functional-901000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0809 11:13:12.462812    1792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/functional-901000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0809 11:13:12.469751    1792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17011-995/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0809 11:13:12.476595    1792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17011-995/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0809 11:13:12.484141    1792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17011-995/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0809 11:13:12.491832    1792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17011-995/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0809 11:13:12.499360    1792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17011-995/.minikube/files/etc/ssl/certs/14102.pem --> /usr/share/ca-certificates/14102.pem (1708 bytes)
	I0809 11:13:12.506381    1792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17011-995/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0809 11:13:12.513400    1792 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17011-995/.minikube/certs/1410.pem --> /usr/share/ca-certificates/1410.pem (1338 bytes)
	I0809 11:13:12.520531    1792 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0809 11:13:12.526142    1792 ssh_runner.go:195] Run: openssl version
	I0809 11:13:12.528207    1792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14102.pem && ln -fs /usr/share/ca-certificates/14102.pem /etc/ssl/certs/14102.pem"
	I0809 11:13:12.531953    1792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14102.pem
	I0809 11:13:12.533510    1792 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug  9 18:10 /usr/share/ca-certificates/14102.pem
	I0809 11:13:12.533529    1792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14102.pem
	I0809 11:13:12.535392    1792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14102.pem /etc/ssl/certs/3ec20f2e.0"
	I0809 11:13:12.538143    1792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0809 11:13:12.541189    1792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0809 11:13:12.542751    1792 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug  9 18:09 /usr/share/ca-certificates/minikubeCA.pem
	I0809 11:13:12.542769    1792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0809 11:13:12.544640    1792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0809 11:13:12.548034    1792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1410.pem && ln -fs /usr/share/ca-certificates/1410.pem /etc/ssl/certs/1410.pem"
	I0809 11:13:12.551600    1792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1410.pem
	I0809 11:13:12.553387    1792 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug  9 18:10 /usr/share/ca-certificates/1410.pem
	I0809 11:13:12.553408    1792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1410.pem
	I0809 11:13:12.555248    1792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1410.pem /etc/ssl/certs/51391683.0"
	I0809 11:13:12.558186    1792 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0809 11:13:12.562041    1792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0809 11:13:12.564681    1792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0809 11:13:12.566735    1792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0809 11:13:12.568558    1792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0809 11:13:12.570692    1792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0809 11:13:12.572402    1792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0809 11:13:12.574367    1792 kubeadm.go:404] StartCluster: {Name:functional-901000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:functional-901000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0809 11:13:12.574433    1792 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0809 11:13:12.580466    1792 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0809 11:13:12.583820    1792 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0809 11:13:12.583831    1792 kubeadm.go:636] restartCluster start
	I0809 11:13:12.583862    1792 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0809 11:13:12.586773    1792 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0809 11:13:12.587041    1792 kubeconfig.go:92] found "functional-901000" server: "https://192.168.105.4:8441"
	I0809 11:13:12.587781    1792 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0809 11:13:12.590658    1792 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
	I0809 11:13:12.590661    1792 kubeadm.go:1128] stopping kube-system containers ...
	I0809 11:13:12.590697    1792 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0809 11:13:12.597843    1792 docker.go:462] Stopping containers: [8538ac79f84f f6881adcd1a3 b4fad4545ee1 0bf9b350bab2 410b8d7c0d58 1f01e80072e3 d5fd913ff98d e272a64a3990 473980efae2f 400ab6659fb7 2f28d3bb8322 e00cde5af0aa e45f4dd29baf 1dbf32a1757b c15de5eeee14 b86bc711fdb4 129dbda8ee46 4a01d1fb6938 fb664f79cece 4bbc39efcc33 4cda4c32e0a9 1aad2fdee16c 712bca7d7b7a efc0670d7180 3b0aabf680b9 d8f524685093 04910c8b6925 8fba3349a13d 7ac02ac353c2]
	I0809 11:13:12.597900    1792 ssh_runner.go:195] Run: docker stop 8538ac79f84f f6881adcd1a3 b4fad4545ee1 0bf9b350bab2 410b8d7c0d58 1f01e80072e3 d5fd913ff98d e272a64a3990 473980efae2f 400ab6659fb7 2f28d3bb8322 e00cde5af0aa e45f4dd29baf 1dbf32a1757b c15de5eeee14 b86bc711fdb4 129dbda8ee46 4a01d1fb6938 fb664f79cece 4bbc39efcc33 4cda4c32e0a9 1aad2fdee16c 712bca7d7b7a efc0670d7180 3b0aabf680b9 d8f524685093 04910c8b6925 8fba3349a13d 7ac02ac353c2
	I0809 11:13:12.604487    1792 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0809 11:13:12.696781    1792 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0809 11:13:12.701571    1792 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Aug  9 18:11 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Aug  9 18:11 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Aug  9 18:11 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Aug  9 18:11 /etc/kubernetes/scheduler.conf
	
	I0809 11:13:12.701607    1792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0809 11:13:12.704928    1792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0809 11:13:12.708357    1792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0809 11:13:12.711991    1792 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0809 11:13:12.712012    1792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0809 11:13:12.715495    1792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0809 11:13:12.718390    1792 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0809 11:13:12.718411    1792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0809 11:13:12.721164    1792 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0809 11:13:12.724479    1792 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0809 11:13:12.724482    1792 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0809 11:13:12.744960    1792 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0809 11:13:13.250833    1792 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0809 11:13:13.363864    1792 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0809 11:13:13.392167    1792 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0809 11:13:13.421730    1792 api_server.go:52] waiting for apiserver process to appear ...
	I0809 11:13:13.421786    1792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0809 11:13:13.426202    1792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0809 11:13:13.934734    1792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0809 11:13:14.434723    1792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0809 11:13:14.439048    1792 api_server.go:72] duration metric: took 1.017353209s to wait for apiserver process to appear ...
	I0809 11:13:14.439053    1792 api_server.go:88] waiting for apiserver healthz status ...
	I0809 11:13:14.439065    1792 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0809 11:13:16.542608    1792 api_server.go:279] https://192.168.105.4:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0809 11:13:16.542616    1792 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0809 11:13:16.542621    1792 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0809 11:13:16.562111    1792 api_server.go:279] https://192.168.105.4:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0809 11:13:16.562121    1792 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0809 11:13:17.064205    1792 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0809 11:13:17.074597    1792 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0809 11:13:17.074611    1792 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0809 11:13:17.564149    1792 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0809 11:13:17.567755    1792 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0809 11:13:17.567760    1792 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0809 11:13:18.064120    1792 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0809 11:13:18.067593    1792 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I0809 11:13:18.074612    1792 api_server.go:141] control plane version: v1.27.4
	I0809 11:13:18.074620    1792 api_server.go:131] duration metric: took 3.635688833s to wait for apiserver health ...
	I0809 11:13:18.074624    1792 cni.go:84] Creating CNI manager for ""
	I0809 11:13:18.074629    1792 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0809 11:13:18.079042    1792 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0809 11:13:18.082090    1792 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0809 11:13:18.085051    1792 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0809 11:13:18.089752    1792 system_pods.go:43] waiting for kube-system pods to appear ...
	I0809 11:13:18.094850    1792 system_pods.go:59] 7 kube-system pods found
	I0809 11:13:18.094857    1792 system_pods.go:61] "coredns-5d78c9869d-zmmb2" [08d59c5d-5c40-459b-b26f-0784ff45add8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0809 11:13:18.094860    1792 system_pods.go:61] "etcd-functional-901000" [ece1b63b-7341-48a7-a83a-dea3af23b2f9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0809 11:13:18.094863    1792 system_pods.go:61] "kube-apiserver-functional-901000" [d0f084d2-388d-46ff-9862-80725efd501c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0809 11:13:18.094866    1792 system_pods.go:61] "kube-controller-manager-functional-901000" [0c635924-bef8-4c57-a2b1-922fd7f24d82] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0809 11:13:18.094868    1792 system_pods.go:61] "kube-proxy-xqqwn" [aa3b855e-b0df-45e7-aee6-e5d9ab8e96c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0809 11:13:18.094871    1792 system_pods.go:61] "kube-scheduler-functional-901000" [b1781f65-abbd-49dc-b8dc-7bc93f902d05] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0809 11:13:18.094873    1792 system_pods.go:61] "storage-provisioner" [2dca2a2b-adb1-41a7-9fe6-7e1b3de8c91e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0809 11:13:18.094875    1792 system_pods.go:74] duration metric: took 5.12075ms to wait for pod list to return data ...
	I0809 11:13:18.094877    1792 node_conditions.go:102] verifying NodePressure condition ...
	I0809 11:13:18.096408    1792 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0809 11:13:18.096418    1792 node_conditions.go:123] node cpu capacity is 2
	I0809 11:13:18.096423    1792 node_conditions.go:105] duration metric: took 1.544417ms to run NodePressure ...
	I0809 11:13:18.096431    1792 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0809 11:13:18.164960    1792 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0809 11:13:18.167938    1792 kubeadm.go:787] kubelet initialised
	I0809 11:13:18.167942    1792 kubeadm.go:788] duration metric: took 2.975959ms waiting for restarted kubelet to initialise ...
	I0809 11:13:18.167945    1792 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0809 11:13:18.172011    1792 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-zmmb2" in "kube-system" namespace to be "Ready" ...
	I0809 11:13:20.190970    1792 pod_ready.go:102] pod "coredns-5d78c9869d-zmmb2" in "kube-system" namespace has status "Ready":"False"
	I0809 11:13:22.191671    1792 pod_ready.go:102] pod "coredns-5d78c9869d-zmmb2" in "kube-system" namespace has status "Ready":"False"
	I0809 11:13:24.682209    1792 pod_ready.go:102] pod "coredns-5d78c9869d-zmmb2" in "kube-system" namespace has status "Ready":"False"
	I0809 11:13:26.692012    1792 pod_ready.go:102] pod "coredns-5d78c9869d-zmmb2" in "kube-system" namespace has status "Ready":"False"
	I0809 11:13:27.188897    1792 pod_ready.go:92] pod "coredns-5d78c9869d-zmmb2" in "kube-system" namespace has status "Ready":"True"
	I0809 11:13:27.188916    1792 pod_ready.go:81] duration metric: took 9.017205625s waiting for pod "coredns-5d78c9869d-zmmb2" in "kube-system" namespace to be "Ready" ...
	I0809 11:13:27.188929    1792 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-901000" in "kube-system" namespace to be "Ready" ...
	I0809 11:13:27.197555    1792 pod_ready.go:92] pod "etcd-functional-901000" in "kube-system" namespace has status "Ready":"True"
	I0809 11:13:27.197593    1792 pod_ready.go:81] duration metric: took 8.63425ms waiting for pod "etcd-functional-901000" in "kube-system" namespace to be "Ready" ...
	I0809 11:13:27.197603    1792 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-901000" in "kube-system" namespace to be "Ready" ...
	I0809 11:13:29.217229    1792 pod_ready.go:102] pod "kube-apiserver-functional-901000" in "kube-system" namespace has status "Ready":"False"
	I0809 11:13:31.224859    1792 pod_ready.go:102] pod "kube-apiserver-functional-901000" in "kube-system" namespace has status "Ready":"False"
	I0809 11:13:33.225235    1792 pod_ready.go:92] pod "kube-apiserver-functional-901000" in "kube-system" namespace has status "Ready":"True"
	I0809 11:13:33.225258    1792 pod_ready.go:81] duration metric: took 6.027851334s waiting for pod "kube-apiserver-functional-901000" in "kube-system" namespace to be "Ready" ...
	I0809 11:13:33.225274    1792 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-901000" in "kube-system" namespace to be "Ready" ...
	I0809 11:13:33.233640    1792 pod_ready.go:92] pod "kube-controller-manager-functional-901000" in "kube-system" namespace has status "Ready":"True"
	I0809 11:13:33.233651    1792 pod_ready.go:81] duration metric: took 8.369041ms waiting for pod "kube-controller-manager-functional-901000" in "kube-system" namespace to be "Ready" ...
	I0809 11:13:33.233663    1792 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-xqqwn" in "kube-system" namespace to be "Ready" ...
	I0809 11:13:33.240462    1792 pod_ready.go:92] pod "kube-proxy-xqqwn" in "kube-system" namespace has status "Ready":"True"
	I0809 11:13:33.240471    1792 pod_ready.go:81] duration metric: took 6.802916ms waiting for pod "kube-proxy-xqqwn" in "kube-system" namespace to be "Ready" ...
	I0809 11:13:33.240483    1792 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-901000" in "kube-system" namespace to be "Ready" ...
	I0809 11:13:33.247929    1792 pod_ready.go:92] pod "kube-scheduler-functional-901000" in "kube-system" namespace has status "Ready":"True"
	I0809 11:13:33.247942    1792 pod_ready.go:81] duration metric: took 7.453625ms waiting for pod "kube-scheduler-functional-901000" in "kube-system" namespace to be "Ready" ...
	I0809 11:13:33.247956    1792 pod_ready.go:38] duration metric: took 15.080519542s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0809 11:13:33.247988    1792 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0809 11:13:33.257162    1792 ops.go:34] apiserver oom_adj: -16
	I0809 11:13:33.257168    1792 kubeadm.go:640] restartCluster took 20.674037584s
	I0809 11:13:33.257173    1792 kubeadm.go:406] StartCluster complete in 20.683512125s
	I0809 11:13:33.257186    1792 settings.go:142] acquiring lock: {Name:mkccab662ae5271e860bc4bdf3048d54a609848d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 11:13:33.257352    1792 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17011-995/kubeconfig
	I0809 11:13:33.257977    1792 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17011-995/kubeconfig: {Name:mk08b0de0097dc34716acdd012f0f4571979d434 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 11:13:33.258368    1792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0809 11:13:33.258399    1792 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0809 11:13:33.258474    1792 addons.go:69] Setting storage-provisioner=true in profile "functional-901000"
	I0809 11:13:33.258486    1792 addons.go:231] Setting addon storage-provisioner=true in "functional-901000"
	W0809 11:13:33.258492    1792 addons.go:240] addon storage-provisioner should already be in state true
	I0809 11:13:33.258547    1792 host.go:66] Checking if "functional-901000" exists ...
	I0809 11:13:33.258578    1792 addons.go:69] Setting default-storageclass=true in profile "functional-901000"
	I0809 11:13:33.258582    1792 config.go:182] Loaded profile config "functional-901000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0809 11:13:33.258591    1792 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-901000"
	I0809 11:13:33.265440    1792 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0809 11:13:33.269297    1792 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0809 11:13:33.269302    1792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0809 11:13:33.269312    1792 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/functional-901000/id_rsa Username:docker}
	I0809 11:13:33.270013    1792 kapi.go:248] "coredns" deployment in "kube-system" namespace and "functional-901000" context rescaled to 1 replicas
	I0809 11:13:33.270034    1792 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0809 11:13:33.274362    1792 out.go:177] * Verifying Kubernetes components...
	I0809 11:13:33.275464    1792 addons.go:231] Setting addon default-storageclass=true in "functional-901000"
	W0809 11:13:33.282379    1792 addons.go:240] addon default-storageclass should already be in state true
	I0809 11:13:33.282396    1792 host.go:66] Checking if "functional-901000" exists ...
	I0809 11:13:33.282429    1792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0809 11:13:33.283321    1792 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0809 11:13:33.283325    1792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0809 11:13:33.283331    1792 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/functional-901000/id_rsa Username:docker}
	I0809 11:13:33.317038    1792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0809 11:13:33.320431    1792 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0809 11:13:33.320446    1792 node_ready.go:35] waiting up to 6m0s for node "functional-901000" to be "Ready" ...
	I0809 11:13:33.321876    1792 node_ready.go:49] node "functional-901000" has status "Ready":"True"
	I0809 11:13:33.321883    1792 node_ready.go:38] duration metric: took 1.427958ms waiting for node "functional-901000" to be "Ready" ...
	I0809 11:13:33.321885    1792 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0809 11:13:33.324488    1792 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-zmmb2" in "kube-system" namespace to be "Ready" ...
	I0809 11:13:33.332324    1792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0809 11:13:33.615271    1792 pod_ready.go:92] pod "coredns-5d78c9869d-zmmb2" in "kube-system" namespace has status "Ready":"True"
	I0809 11:13:33.615275    1792 pod_ready.go:81] duration metric: took 290.792875ms waiting for pod "coredns-5d78c9869d-zmmb2" in "kube-system" namespace to be "Ready" ...
	I0809 11:13:33.615279    1792 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-901000" in "kube-system" namespace to be "Ready" ...
	I0809 11:13:33.685160    1792 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0809 11:13:33.689147    1792 addons.go:502] enable addons completed in 430.768ms: enabled=[storage-provisioner default-storageclass]
	I0809 11:13:34.018434    1792 pod_ready.go:92] pod "etcd-functional-901000" in "kube-system" namespace has status "Ready":"True"
	I0809 11:13:34.018460    1792 pod_ready.go:81] duration metric: took 403.187667ms waiting for pod "etcd-functional-901000" in "kube-system" namespace to be "Ready" ...
	I0809 11:13:34.018473    1792 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-901000" in "kube-system" namespace to be "Ready" ...
	I0809 11:13:34.419075    1792 pod_ready.go:92] pod "kube-apiserver-functional-901000" in "kube-system" namespace has status "Ready":"True"
	I0809 11:13:34.419104    1792 pod_ready.go:81] duration metric: took 400.628583ms waiting for pod "kube-apiserver-functional-901000" in "kube-system" namespace to be "Ready" ...
	I0809 11:13:34.419131    1792 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-901000" in "kube-system" namespace to be "Ready" ...
	I0809 11:13:34.817564    1792 pod_ready.go:92] pod "kube-controller-manager-functional-901000" in "kube-system" namespace has status "Ready":"True"
	I0809 11:13:34.817572    1792 pod_ready.go:81] duration metric: took 398.442208ms waiting for pod "kube-controller-manager-functional-901000" in "kube-system" namespace to be "Ready" ...
	I0809 11:13:34.817580    1792 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xqqwn" in "kube-system" namespace to be "Ready" ...
	I0809 11:13:35.220487    1792 pod_ready.go:92] pod "kube-proxy-xqqwn" in "kube-system" namespace has status "Ready":"True"
	I0809 11:13:35.220507    1792 pod_ready.go:81] duration metric: took 402.932417ms waiting for pod "kube-proxy-xqqwn" in "kube-system" namespace to be "Ready" ...
	I0809 11:13:35.220526    1792 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-901000" in "kube-system" namespace to be "Ready" ...
	I0809 11:13:35.616464    1792 pod_ready.go:92] pod "kube-scheduler-functional-901000" in "kube-system" namespace has status "Ready":"True"
	I0809 11:13:35.616474    1792 pod_ready.go:81] duration metric: took 395.948792ms waiting for pod "kube-scheduler-functional-901000" in "kube-system" namespace to be "Ready" ...
	I0809 11:13:35.616481    1792 pod_ready.go:38] duration metric: took 2.294669167s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0809 11:13:35.616494    1792 api_server.go:52] waiting for apiserver process to appear ...
	I0809 11:13:35.616619    1792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0809 11:13:35.626836    1792 api_server.go:72] duration metric: took 2.356866792s to wait for apiserver process to appear ...
	I0809 11:13:35.626842    1792 api_server.go:88] waiting for apiserver healthz status ...
	I0809 11:13:35.626853    1792 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0809 11:13:35.632928    1792 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I0809 11:13:35.633980    1792 api_server.go:141] control plane version: v1.27.4
	I0809 11:13:35.633987    1792 api_server.go:131] duration metric: took 7.142041ms to wait for apiserver health ...
	I0809 11:13:35.633992    1792 system_pods.go:43] waiting for kube-system pods to appear ...
	I0809 11:13:35.824797    1792 system_pods.go:59] 7 kube-system pods found
	I0809 11:13:35.824807    1792 system_pods.go:61] "coredns-5d78c9869d-zmmb2" [08d59c5d-5c40-459b-b26f-0784ff45add8] Running
	I0809 11:13:35.824811    1792 system_pods.go:61] "etcd-functional-901000" [ece1b63b-7341-48a7-a83a-dea3af23b2f9] Running
	I0809 11:13:35.824814    1792 system_pods.go:61] "kube-apiserver-functional-901000" [d0f084d2-388d-46ff-9862-80725efd501c] Running
	I0809 11:13:35.824818    1792 system_pods.go:61] "kube-controller-manager-functional-901000" [0c635924-bef8-4c57-a2b1-922fd7f24d82] Running
	I0809 11:13:35.824820    1792 system_pods.go:61] "kube-proxy-xqqwn" [aa3b855e-b0df-45e7-aee6-e5d9ab8e96c5] Running
	I0809 11:13:35.824823    1792 system_pods.go:61] "kube-scheduler-functional-901000" [b1781f65-abbd-49dc-b8dc-7bc93f902d05] Running
	I0809 11:13:35.824826    1792 system_pods.go:61] "storage-provisioner" [2dca2a2b-adb1-41a7-9fe6-7e1b3de8c91e] Running
	I0809 11:13:35.824829    1792 system_pods.go:74] duration metric: took 190.840542ms to wait for pod list to return data ...
	I0809 11:13:35.824833    1792 default_sa.go:34] waiting for default service account to be created ...
	I0809 11:13:36.021099    1792 default_sa.go:45] found service account: "default"
	I0809 11:13:36.021124    1792 default_sa.go:55] duration metric: took 196.29075ms for default service account to be created ...
	I0809 11:13:36.021145    1792 system_pods.go:116] waiting for k8s-apps to be running ...
	I0809 11:13:36.227312    1792 system_pods.go:86] 7 kube-system pods found
	I0809 11:13:36.227335    1792 system_pods.go:89] "coredns-5d78c9869d-zmmb2" [08d59c5d-5c40-459b-b26f-0784ff45add8] Running
	I0809 11:13:36.227344    1792 system_pods.go:89] "etcd-functional-901000" [ece1b63b-7341-48a7-a83a-dea3af23b2f9] Running
	I0809 11:13:36.227352    1792 system_pods.go:89] "kube-apiserver-functional-901000" [d0f084d2-388d-46ff-9862-80725efd501c] Running
	I0809 11:13:36.227360    1792 system_pods.go:89] "kube-controller-manager-functional-901000" [0c635924-bef8-4c57-a2b1-922fd7f24d82] Running
	I0809 11:13:36.227367    1792 system_pods.go:89] "kube-proxy-xqqwn" [aa3b855e-b0df-45e7-aee6-e5d9ab8e96c5] Running
	I0809 11:13:36.227374    1792 system_pods.go:89] "kube-scheduler-functional-901000" [b1781f65-abbd-49dc-b8dc-7bc93f902d05] Running
	I0809 11:13:36.227380    1792 system_pods.go:89] "storage-provisioner" [2dca2a2b-adb1-41a7-9fe6-7e1b3de8c91e] Running
	I0809 11:13:36.227391    1792 system_pods.go:126] duration metric: took 206.246416ms to wait for k8s-apps to be running ...
	I0809 11:13:36.227405    1792 system_svc.go:44] waiting for kubelet service to be running ....
	I0809 11:13:36.227643    1792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0809 11:13:36.245110    1792 system_svc.go:56] duration metric: took 17.707625ms WaitForService to wait for kubelet.
	I0809 11:13:36.245122    1792 kubeadm.go:581] duration metric: took 2.975172959s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0809 11:13:36.245141    1792 node_conditions.go:102] verifying NodePressure condition ...
	I0809 11:13:36.421357    1792 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0809 11:13:36.421383    1792 node_conditions.go:123] node cpu capacity is 2
	I0809 11:13:36.421410    1792 node_conditions.go:105] duration metric: took 176.265416ms to run NodePressure ...
	I0809 11:13:36.421433    1792 start.go:228] waiting for startup goroutines ...
	I0809 11:13:36.421444    1792 start.go:233] waiting for cluster config update ...
	I0809 11:13:36.421464    1792 start.go:242] writing updated cluster config ...
	I0809 11:13:36.422679    1792 ssh_runner.go:195] Run: rm -f paused
	I0809 11:13:36.483044    1792 start.go:599] kubectl: 1.27.2, cluster: 1.27.4 (minor skew: 0)
	I0809 11:13:36.487571    1792 out.go:177] * Done! kubectl is now configured to use "functional-901000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Wed 2023-08-09 18:11:06 UTC, ends at Wed 2023-08-09 18:14:27 UTC. --
	Aug 09 18:14:07 functional-901000 dockerd[7127]: time="2023-08-09T18:14:07.104754296Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 09 18:14:07 functional-901000 dockerd[7127]: time="2023-08-09T18:14:07.508622510Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 09 18:14:07 functional-901000 dockerd[7127]: time="2023-08-09T18:14:07.508669760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 09 18:14:07 functional-901000 dockerd[7127]: time="2023-08-09T18:14:07.508682176Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 09 18:14:07 functional-901000 dockerd[7127]: time="2023-08-09T18:14:07.508690051Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 09 18:14:07 functional-901000 dockerd[7127]: time="2023-08-09T18:14:07.543509768Z" level=info msg="shim disconnected" id=cd3767774236a886a9035e90d9a3e8d31f7e0628cdc7ea3cc9e7a7b8def0c94e namespace=moby
	Aug 09 18:14:07 functional-901000 dockerd[7127]: time="2023-08-09T18:14:07.543536018Z" level=warning msg="cleaning up after shim disconnected" id=cd3767774236a886a9035e90d9a3e8d31f7e0628cdc7ea3cc9e7a7b8def0c94e namespace=moby
	Aug 09 18:14:07 functional-901000 dockerd[7127]: time="2023-08-09T18:14:07.543539893Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 09 18:14:07 functional-901000 dockerd[7121]: time="2023-08-09T18:14:07.543459101Z" level=info msg="ignoring event" container=cd3767774236a886a9035e90d9a3e8d31f7e0628cdc7ea3cc9e7a7b8def0c94e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 09 18:14:12 functional-901000 dockerd[7127]: time="2023-08-09T18:14:12.669430910Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 09 18:14:12 functional-901000 dockerd[7127]: time="2023-08-09T18:14:12.669520951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 09 18:14:12 functional-901000 dockerd[7127]: time="2023-08-09T18:14:12.669560701Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 09 18:14:12 functional-901000 dockerd[7127]: time="2023-08-09T18:14:12.669578617Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 09 18:14:12 functional-901000 dockerd[7121]: time="2023-08-09T18:14:12.714385481Z" level=info msg="ignoring event" container=e30b2ac8805d801720118d9e81cdaea0c958b53972778292b27c4cee29efd449 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 09 18:14:12 functional-901000 dockerd[7127]: time="2023-08-09T18:14:12.714742355Z" level=info msg="shim disconnected" id=e30b2ac8805d801720118d9e81cdaea0c958b53972778292b27c4cee29efd449 namespace=moby
	Aug 09 18:14:12 functional-901000 dockerd[7127]: time="2023-08-09T18:14:12.714861272Z" level=warning msg="cleaning up after shim disconnected" id=e30b2ac8805d801720118d9e81cdaea0c958b53972778292b27c4cee29efd449 namespace=moby
	Aug 09 18:14:12 functional-901000 dockerd[7127]: time="2023-08-09T18:14:12.714881563Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 09 18:14:21 functional-901000 dockerd[7127]: time="2023-08-09T18:14:21.694840171Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 09 18:14:21 functional-901000 dockerd[7127]: time="2023-08-09T18:14:21.694899421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 09 18:14:21 functional-901000 dockerd[7127]: time="2023-08-09T18:14:21.694908879Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 09 18:14:21 functional-901000 dockerd[7127]: time="2023-08-09T18:14:21.694915545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 09 18:14:21 functional-901000 dockerd[7127]: time="2023-08-09T18:14:21.756048532Z" level=info msg="shim disconnected" id=cfb63818398114c3803a21fa44b54eafd7c29fbad6a3b51668e71ec889dab0d2 namespace=moby
	Aug 09 18:14:21 functional-901000 dockerd[7121]: time="2023-08-09T18:14:21.755970741Z" level=info msg="ignoring event" container=cfb63818398114c3803a21fa44b54eafd7c29fbad6a3b51668e71ec889dab0d2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 09 18:14:21 functional-901000 dockerd[7127]: time="2023-08-09T18:14:21.756267198Z" level=warning msg="cleaning up after shim disconnected" id=cfb63818398114c3803a21fa44b54eafd7c29fbad6a3b51668e71ec889dab0d2 namespace=moby
	Aug 09 18:14:21 functional-901000 dockerd[7127]: time="2023-08-09T18:14:21.756286364Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                           CREATED              STATE               NAME                      ATTEMPT             POD ID
	cfb6381839811       72565bf5bbedf                                                                   6 seconds ago        Exited              echoserver-arm            2                   f5dcdfcc6053b
	e30b2ac8805d8       72565bf5bbedf                                                                   15 seconds ago       Exited              echoserver-arm            2                   714ad1747d3e7
	9df821e881f69       nginx@sha256:67f9a4f10d147a6e04629340e6493c9703300ca23a2f7f3aa56fe615d75d31ca   27 seconds ago       Running             myfrontend                0                   773a2d7302f55
	f55dda78d59e1       nginx@sha256:647c5c83418c19eef0cddc647b9899326e3081576390c4c7baa4fce545123b6c   42 seconds ago       Running             nginx                     0                   2d7f3412a230d
	794826ef35a61       532e5a30e948f                                                                   About a minute ago   Running             kube-proxy                2                   495120a6f05e4
	613044467ab59       ba04bb24b9575                                                                   About a minute ago   Running             storage-provisioner       2                   87a3eb111a6e7
	738d5d1f59332       97e04611ad434                                                                   About a minute ago   Running             coredns                   2                   b22369d00e269
	d79a2f66e5a17       6eb63895cb67f                                                                   About a minute ago   Running             kube-scheduler            2                   b012f92d0982f
	551c6c0673d63       389f6f052cf83                                                                   About a minute ago   Running             kube-controller-manager   2                   1586378c0b2cf
	064170a9a2fa1       64aece92d6bde                                                                   About a minute ago   Running             kube-apiserver            0                   7d6b10b230aef
	28861f2a07764       24bc64e911039                                                                   About a minute ago   Running             etcd                      2                   9cc87033d3eab
	8538ac79f84fe       97e04611ad434                                                                   About a minute ago   Exited              coredns                   1                   e272a64a3990b
	f6881adcd1a3d       ba04bb24b9575                                                                   About a minute ago   Exited              storage-provisioner       1                   1dbf32a1757b1
	b4fad4545ee17       24bc64e911039                                                                   About a minute ago   Exited              etcd                      1                   e45f4dd29bafb
	0bf9b350bab21       6eb63895cb67f                                                                   About a minute ago   Exited              kube-scheduler            1                   400ab6659fb78
	1f01e80072e34       389f6f052cf83                                                                   About a minute ago   Exited              kube-controller-manager   1                   e00cde5af0aa4
	d5fd913ff98d9       532e5a30e948f                                                                   About a minute ago   Exited              kube-proxy                1                   473980efae2f1
	
	* 
	* ==> coredns [738d5d1f5933] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:51307 - 23788 "HINFO IN 4530840297548302878.2255228602591843112. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.007085477s
	[INFO] 10.244.0.1:40168 - 61187 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000103208s
	[INFO] 10.244.0.1:14269 - 55707 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000087667s
	[INFO] 10.244.0.1:2046 - 12224 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000027917s
	[INFO] 10.244.0.1:56852 - 48954 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.001686127s
	[INFO] 10.244.0.1:36830 - 9634 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000058917s
	[INFO] 10.244.0.1:2381 - 36839 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.00013725s
	
	* 
	* ==> coredns [8538ac79f84f] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:53369 - 49270 "HINFO IN 3006127965157080331.2030907554921897640. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.005318514s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               functional-901000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-901000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e286a113bb5db20a65222adef757d15268cdbb1a
	                    minikube.k8s.io/name=functional-901000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_09T11_11_22_0700
	                    minikube.k8s.io/version=v1.31.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 09 Aug 2023 18:11:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-901000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 09 Aug 2023 18:14:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 09 Aug 2023 18:14:17 +0000   Wed, 09 Aug 2023 18:11:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 09 Aug 2023 18:14:17 +0000   Wed, 09 Aug 2023 18:11:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 09 Aug 2023 18:14:17 +0000   Wed, 09 Aug 2023 18:11:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 09 Aug 2023 18:14:17 +0000   Wed, 09 Aug 2023 18:11:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-901000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 0cec756b289f4c458df857d70ae5070a
	  System UUID:                0cec756b289f4c458df857d70ae5070a
	  Boot ID:                    efd183a9-b241-4176-8d50-c396abf583b8
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.4
	  Kubelet Version:            v1.27.4
	  Kube-Proxy Version:         v1.27.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-7b684b55f9-r2vqb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	  default                     hello-node-connect-58d66798bb-2gr4w          0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 coredns-5d78c9869d-zmmb2                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m52s
	  kube-system                 etcd-functional-901000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         3m6s
	  kube-system                 kube-apiserver-functional-901000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 kube-controller-manager-functional-901000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m5s
	  kube-system                 kube-proxy-xqqwn                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m52s
	  kube-system                 kube-scheduler-functional-901000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m6s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2m50s              kube-proxy       
	  Normal   Starting                 69s                kube-proxy       
	  Normal   Starting                 109s               kube-proxy       
	  Normal   Starting                 3m5s               kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  3m5s               kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  3m5s               kubelet          Node functional-901000 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m5s               kubelet          Node functional-901000 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m5s               kubelet          Node functional-901000 status is now: NodeHasSufficientPID
	  Normal   NodeReady                3m1s               kubelet          Node functional-901000 status is now: NodeReady
	  Normal   RegisteredNode           2m53s              node-controller  Node functional-901000 event: Registered Node functional-901000 in Controller
	  Warning  ContainerGCFailed        2m5s               kubelet          rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	  Normal   RegisteredNode           97s                node-controller  Node functional-901000 event: Registered Node functional-901000 in Controller
	  Normal   Starting                 74s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  74s (x8 over 74s)  kubelet          Node functional-901000 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    74s (x8 over 74s)  kubelet          Node functional-901000 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     74s (x7 over 74s)  kubelet          Node functional-901000 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  74s                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           58s                node-controller  Node functional-901000 event: Registered Node functional-901000 in Controller
	
	* 
	* ==> dmesg <==
	* [Aug 9 18:12] systemd-fstab-generator[4302]: Ignoring "noauto" for root device
	[  +0.151114] systemd-fstab-generator[4335]: Ignoring "noauto" for root device
	[  +0.111631] systemd-fstab-generator[4346]: Ignoring "noauto" for root device
	[  +0.080210] systemd-fstab-generator[4359]: Ignoring "noauto" for root device
	[ +11.396501] systemd-fstab-generator[4914]: Ignoring "noauto" for root device
	[  +0.083901] systemd-fstab-generator[4925]: Ignoring "noauto" for root device
	[  +0.084079] systemd-fstab-generator[4936]: Ignoring "noauto" for root device
	[  +0.082758] systemd-fstab-generator[4947]: Ignoring "noauto" for root device
	[  +0.079676] systemd-fstab-generator[5019]: Ignoring "noauto" for root device
	[  +4.899252] kauditd_printk_skb: 29 callbacks suppressed
	[ +24.878546] systemd-fstab-generator[6650]: Ignoring "noauto" for root device
	[  +0.120939] systemd-fstab-generator[6683]: Ignoring "noauto" for root device
	[  +0.077048] systemd-fstab-generator[6694]: Ignoring "noauto" for root device
	[Aug 9 18:13] systemd-fstab-generator[6707]: Ignoring "noauto" for root device
	[ +11.482421] systemd-fstab-generator[7272]: Ignoring "noauto" for root device
	[  +0.086227] systemd-fstab-generator[7283]: Ignoring "noauto" for root device
	[  +0.073531] systemd-fstab-generator[7294]: Ignoring "noauto" for root device
	[  +0.078251] systemd-fstab-generator[7305]: Ignoring "noauto" for root device
	[  +0.056118] systemd-fstab-generator[7322]: Ignoring "noauto" for root device
	[  +1.046605] systemd-fstab-generator[7634]: Ignoring "noauto" for root device
	[  +4.624441] kauditd_printk_skb: 29 callbacks suppressed
	[ +25.236108] kauditd_printk_skb: 16 callbacks suppressed
	[  +9.849872] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.786183] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[Aug 9 18:14] kauditd_printk_skb: 1 callbacks suppressed
	
	* 
	* ==> etcd [28861f2a0776] <==
	* {"level":"info","ts":"2023-08-09T18:13:14.691Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-08-09T18:13:14.691Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-08-09T18:13:14.691Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2023-08-09T18:13:14.691Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2023-08-09T18:13:14.691Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-09T18:13:14.691Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-09T18:13:14.693Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-08-09T18:13:14.695Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-08-09T18:13:14.695Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-08-09T18:13:14.695Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-08-09T18:13:14.695Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-08-09T18:13:16.060Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2023-08-09T18:13:16.060Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-08-09T18:13:16.060Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-08-09T18:13:16.060Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2023-08-09T18:13:16.061Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2023-08-09T18:13:16.061Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2023-08-09T18:13:16.061Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2023-08-09T18:13:16.066Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-09T18:13:16.066Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-901000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2023-08-09T18:13:16.066Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-09T18:13:16.069Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-08-09T18:13:16.069Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2023-08-09T18:13:16.069Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-08-09T18:13:16.069Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> etcd [b4fad4545ee1] <==
	* {"level":"info","ts":"2023-08-09T18:12:36.011Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-08-09T18:12:36.018Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-08-09T18:12:36.011Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-08-09T18:12:36.018Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-08-09T18:12:36.018Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-08-09T18:12:36.973Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 2"}
	{"level":"info","ts":"2023-08-09T18:12:36.973Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-08-09T18:12:36.973Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2023-08-09T18:12:36.973Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2023-08-09T18:12:36.973Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-08-09T18:12:36.973Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2023-08-09T18:12:36.973Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-08-09T18:12:36.976Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-09T18:12:36.977Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-09T18:12:36.980Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-08-09T18:12:36.976Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-901000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2023-08-09T18:12:36.980Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-08-09T18:12:36.980Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-08-09T18:12:36.980Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2023-08-09T18:13:00.760Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-08-09T18:13:00.760Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"functional-901000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"info","ts":"2023-08-09T18:13:00.773Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2023-08-09T18:13:00.774Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-08-09T18:13:00.775Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-08-09T18:13:00.775Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"functional-901000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	* 
	* ==> kernel <==
	*  18:14:27 up 3 min,  0 users,  load average: 0.16, 0.15, 0.06
	Linux functional-901000 5.10.57 #1 SMP PREEMPT Mon Jul 31 23:05:09 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [064170a9a2fa] <==
	* I0809 18:13:16.798047       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0809 18:13:16.798256       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0809 18:13:16.798296       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0809 18:13:16.798316       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0809 18:13:16.798499       1 shared_informer.go:318] Caches are synced for configmaps
	I0809 18:13:16.800808       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0809 18:13:16.813076       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0809 18:13:16.813089       1 aggregator.go:152] initial CRD sync complete...
	I0809 18:13:16.813093       1 autoregister_controller.go:141] Starting autoregister controller
	I0809 18:13:16.813095       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0809 18:13:16.813098       1 cache.go:39] Caches are synced for autoregister controller
	I0809 18:13:17.570361       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0809 18:13:17.716287       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0809 18:13:18.319678       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0809 18:13:18.322808       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0809 18:13:18.338556       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0809 18:13:18.347288       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0809 18:13:18.349749       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0809 18:13:29.527868       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0809 18:13:29.677469       1 controller.go:624] quota admission added evaluator for: endpoints
	I0809 18:13:38.163041       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs=map[IPv4:10.109.107.120]
	I0809 18:13:42.695743       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs=map[IPv4:10.102.173.206]
	I0809 18:13:53.170123       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0809 18:13:53.212269       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs=map[IPv4:10.103.176.185]
	I0809 18:14:06.613279       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs=map[IPv4:10.109.175.251]
	
	* 
	* ==> kube-controller-manager [1f01e80072e3] <==
	* I0809 18:12:50.334992       1 shared_informer.go:318] Caches are synced for GC
	I0809 18:12:50.349441       1 shared_informer.go:318] Caches are synced for disruption
	I0809 18:12:50.355604       1 shared_informer.go:318] Caches are synced for daemon sets
	I0809 18:12:50.355671       1 shared_informer.go:318] Caches are synced for job
	I0809 18:12:50.357819       1 shared_informer.go:318] Caches are synced for deployment
	I0809 18:12:50.368738       1 shared_informer.go:318] Caches are synced for HPA
	I0809 18:12:50.370931       1 shared_informer.go:318] Caches are synced for attach detach
	I0809 18:12:50.372056       1 shared_informer.go:318] Caches are synced for PVC protection
	I0809 18:12:50.373146       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0809 18:12:50.380366       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0809 18:12:50.381458       1 shared_informer.go:318] Caches are synced for stateful set
	I0809 18:12:50.384792       1 shared_informer.go:318] Caches are synced for resource quota
	I0809 18:12:50.384840       1 shared_informer.go:318] Caches are synced for taint
	I0809 18:12:50.384882       1 node_lifecycle_controller.go:1223] "Initializing eviction metric for zone" zone=""
	I0809 18:12:50.384942       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-901000"
	I0809 18:12:50.384962       1 node_lifecycle_controller.go:1069] "Controller detected that zone is now in new state" zone="" newState=Normal
	I0809 18:12:50.384987       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0809 18:12:50.384998       1 taint_manager.go:211] "Sending events to api server"
	I0809 18:12:50.385225       1 event.go:307] "Event occurred" object="functional-901000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-901000 event: Registered Node functional-901000 in Controller"
	I0809 18:12:50.431700       1 shared_informer.go:318] Caches are synced for resource quota
	I0809 18:12:50.435394       1 shared_informer.go:318] Caches are synced for namespace
	I0809 18:12:50.437620       1 shared_informer.go:318] Caches are synced for service account
	I0809 18:12:50.798365       1 shared_informer.go:318] Caches are synced for garbage collector
	I0809 18:12:50.847480       1 shared_informer.go:318] Caches are synced for garbage collector
	I0809 18:12:50.847495       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	* 
	* ==> kube-controller-manager [551c6c0673d6] <==
	* I0809 18:13:29.526856       1 shared_informer.go:318] Caches are synced for PV protection
	I0809 18:13:29.527954       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0809 18:13:29.528030       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0809 18:13:29.528271       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0809 18:13:29.528924       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0809 18:13:29.529732       1 shared_informer.go:318] Caches are synced for deployment
	I0809 18:13:29.530875       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0809 18:13:29.576033       1 shared_informer.go:318] Caches are synced for disruption
	I0809 18:13:29.634855       1 shared_informer.go:318] Caches are synced for PVC protection
	I0809 18:13:29.638012       1 shared_informer.go:318] Caches are synced for resource quota
	I0809 18:13:29.676311       1 shared_informer.go:318] Caches are synced for expand
	I0809 18:13:29.709480       1 shared_informer.go:318] Caches are synced for resource quota
	I0809 18:13:29.719621       1 shared_informer.go:318] Caches are synced for ephemeral
	I0809 18:13:29.723751       1 shared_informer.go:318] Caches are synced for attach detach
	I0809 18:13:29.724406       1 shared_informer.go:318] Caches are synced for persistent volume
	I0809 18:13:29.724488       1 shared_informer.go:318] Caches are synced for stateful set
	I0809 18:13:30.033575       1 shared_informer.go:318] Caches are synced for garbage collector
	I0809 18:13:30.033597       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0809 18:13:30.055919       1 shared_informer.go:318] Caches are synced for garbage collector
	I0809 18:13:47.583844       1 event.go:307] "Event occurred" object="default/myclaim" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0809 18:13:47.584027       1 event.go:307] "Event occurred" object="default/myclaim" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0809 18:13:53.176638       1 event.go:307] "Event occurred" object="default/hello-node-connect" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-connect-58d66798bb to 1"
	I0809 18:13:53.184669       1 event.go:307] "Event occurred" object="default/hello-node-connect-58d66798bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-connect-58d66798bb-2gr4w"
	I0809 18:14:06.573617       1 event.go:307] "Event occurred" object="default/hello-node" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-7b684b55f9 to 1"
	I0809 18:14:06.575707       1 event.go:307] "Event occurred" object="default/hello-node-7b684b55f9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-7b684b55f9-r2vqb"
	
	* 
	* ==> kube-proxy [794826ef35a6] <==
	* I0809 18:13:18.204662       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I0809 18:13:18.204713       1 server_others.go:110] "Detected node IP" address="192.168.105.4"
	I0809 18:13:18.204727       1 server_others.go:554] "Using iptables proxy"
	I0809 18:13:18.228971       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0809 18:13:18.228983       1 server_others.go:192] "Using iptables Proxier"
	I0809 18:13:18.229000       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0809 18:13:18.229176       1 server.go:658] "Version info" version="v1.27.4"
	I0809 18:13:18.229180       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0809 18:13:18.231621       1 config.go:188] "Starting service config controller"
	I0809 18:13:18.231632       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0809 18:13:18.231642       1 config.go:97] "Starting endpoint slice config controller"
	I0809 18:13:18.231643       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0809 18:13:18.234087       1 config.go:315] "Starting node config controller"
	I0809 18:13:18.234092       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0809 18:13:18.331843       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0809 18:13:18.331868       1 shared_informer.go:318] Caches are synced for service config
	I0809 18:13:18.334304       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-proxy [d5fd913ff98d] <==
	* I0809 18:12:37.685231       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I0809 18:12:37.685281       1 server_others.go:110] "Detected node IP" address="192.168.105.4"
	I0809 18:12:37.685291       1 server_others.go:554] "Using iptables proxy"
	I0809 18:12:37.705197       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0809 18:12:37.705285       1 server_others.go:192] "Using iptables Proxier"
	I0809 18:12:37.705342       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0809 18:12:37.705567       1 server.go:658] "Version info" version="v1.27.4"
	I0809 18:12:37.705574       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0809 18:12:37.705950       1 config.go:188] "Starting service config controller"
	I0809 18:12:37.705959       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0809 18:12:37.705969       1 config.go:97] "Starting endpoint slice config controller"
	I0809 18:12:37.705970       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0809 18:12:37.706157       1 config.go:315] "Starting node config controller"
	I0809 18:12:37.706159       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0809 18:12:37.806595       1 shared_informer.go:318] Caches are synced for node config
	I0809 18:12:37.806635       1 shared_informer.go:318] Caches are synced for service config
	I0809 18:12:37.806665       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [0bf9b350bab2] <==
	* I0809 18:12:36.241602       1 serving.go:348] Generated self-signed cert in-memory
	W0809 18:12:37.658680       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0809 18:12:37.658696       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0809 18:12:37.658701       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0809 18:12:37.658703       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0809 18:12:37.685184       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.4"
	I0809 18:12:37.685659       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0809 18:12:37.686643       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0809 18:12:37.686836       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0809 18:12:37.686845       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0809 18:12:37.686899       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0809 18:12:37.787424       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0809 18:13:00.771147       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0809 18:13:00.771170       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0809 18:13:00.771219       1 scheduling_queue.go:1139] "Error while retrieving next pod from scheduling queue" err="scheduling queue is closed"
	E0809 18:13:00.771240       1 run.go:74] "command failed" err="finished without leader elect"
	I0809 18:13:00.771243       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	* 
	* ==> kube-scheduler [d79a2f66e5a1] <==
	* I0809 18:13:14.819974       1 serving.go:348] Generated self-signed cert in-memory
	W0809 18:13:16.746683       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0809 18:13:16.746782       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0809 18:13:16.746821       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0809 18:13:16.746836       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0809 18:13:16.760293       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.4"
	I0809 18:13:16.760304       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0809 18:13:16.761701       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0809 18:13:16.761777       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0809 18:13:16.761785       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0809 18:13:16.761792       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0809 18:13:16.862626       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-08-09 18:11:06 UTC, ends at Wed 2023-08-09 18:14:27 UTC. --
	Aug 09 18:14:01 functional-901000 kubelet[7640]: I0809 18:14:01.358935    7640 scope.go:115] "RemoveContainer" containerID="0b052057aeaeb9d9afdb314ebcb2d8e77dc0964af18a9f4ec0ddc0bd4ccc9026"
	Aug 09 18:14:01 functional-901000 kubelet[7640]: E0809 18:14:01.359215    7640 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 10s restarting failed container=echoserver-arm pod=hello-node-connect-58d66798bb-2gr4w_default(c4c18db8-f524-4c1b-8037-0b0e7861d4e7)\"" pod="default/hello-node-connect-58d66798bb-2gr4w" podUID=c4c18db8-f524-4c1b-8037-0b0e7861d4e7
	Aug 09 18:14:01 functional-901000 kubelet[7640]: I0809 18:14:01.382893    7640 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/sp-pod" podStartSLOduration=1.63161321 podCreationTimestamp="2023-08-09 18:13:59 +0000 UTC" firstStartedPulling="2023-08-09 18:13:59.869397339 +0000 UTC m=+46.318310274" lastFinishedPulling="2023-08-09 18:14:00.620638256 +0000 UTC m=+47.069551274" observedRunningTime="2023-08-09 18:14:01.382619419 +0000 UTC m=+47.831532395" watchObservedRunningTime="2023-08-09 18:14:01.38285421 +0000 UTC m=+47.831767145"
	Aug 09 18:14:06 functional-901000 kubelet[7640]: E0809 18:14:06.535535    7640 upgradeaware.go:426] Error proxying data from client to backend: readfrom tcp 127.0.0.1:43380->127.0.0.1:32911: write tcp 127.0.0.1:43380->127.0.0.1:32911: write: broken pipe
	Aug 09 18:14:06 functional-901000 kubelet[7640]: I0809 18:14:06.579230    7640 topology_manager.go:212] "Topology Admit Handler"
	Aug 09 18:14:06 functional-901000 kubelet[7640]: I0809 18:14:06.729757    7640 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjgxn\" (UniqueName: \"kubernetes.io/projected/c1ad0be7-b3ff-4b79-bb48-ae2752acf50a-kube-api-access-pjgxn\") pod \"hello-node-7b684b55f9-r2vqb\" (UID: \"c1ad0be7-b3ff-4b79-bb48-ae2752acf50a\") " pod="default/hello-node-7b684b55f9-r2vqb"
	Aug 09 18:14:07 functional-901000 kubelet[7640]: I0809 18:14:07.473569    7640 scope.go:115] "RemoveContainer" containerID="6a818483aca9e8c6eb443edd2a4de8e462bba3f03feee6e13371d566c9202075"
	Aug 09 18:14:08 functional-901000 kubelet[7640]: I0809 18:14:08.520041    7640 scope.go:115] "RemoveContainer" containerID="6a818483aca9e8c6eb443edd2a4de8e462bba3f03feee6e13371d566c9202075"
	Aug 09 18:14:08 functional-901000 kubelet[7640]: I0809 18:14:08.520997    7640 scope.go:115] "RemoveContainer" containerID="cd3767774236a886a9035e90d9a3e8d31f7e0628cdc7ea3cc9e7a7b8def0c94e"
	Aug 09 18:14:08 functional-901000 kubelet[7640]: E0809 18:14:08.521324    7640 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 10s restarting failed container=echoserver-arm pod=hello-node-7b684b55f9-r2vqb_default(c1ad0be7-b3ff-4b79-bb48-ae2752acf50a)\"" pod="default/hello-node-7b684b55f9-r2vqb" podUID=c1ad0be7-b3ff-4b79-bb48-ae2752acf50a
	Aug 09 18:14:12 functional-901000 kubelet[7640]: I0809 18:14:12.624238    7640 scope.go:115] "RemoveContainer" containerID="0b052057aeaeb9d9afdb314ebcb2d8e77dc0964af18a9f4ec0ddc0bd4ccc9026"
	Aug 09 18:14:13 functional-901000 kubelet[7640]: I0809 18:14:13.620811    7640 scope.go:115] "RemoveContainer" containerID="0b052057aeaeb9d9afdb314ebcb2d8e77dc0964af18a9f4ec0ddc0bd4ccc9026"
	Aug 09 18:14:13 functional-901000 kubelet[7640]: I0809 18:14:13.620961    7640 scope.go:115] "RemoveContainer" containerID="e30b2ac8805d801720118d9e81cdaea0c958b53972778292b27c4cee29efd449"
	Aug 09 18:14:13 functional-901000 kubelet[7640]: E0809 18:14:13.621080    7640 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-58d66798bb-2gr4w_default(c4c18db8-f524-4c1b-8037-0b0e7861d4e7)\"" pod="default/hello-node-connect-58d66798bb-2gr4w" podUID=c4c18db8-f524-4c1b-8037-0b0e7861d4e7
	Aug 09 18:14:13 functional-901000 kubelet[7640]: E0809 18:14:13.632362    7640 iptables.go:575] "Could not set up iptables canary" err=<
	Aug 09 18:14:13 functional-901000 kubelet[7640]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 09 18:14:13 functional-901000 kubelet[7640]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 09 18:14:13 functional-901000 kubelet[7640]:  > table=nat chain=KUBE-KUBELET-CANARY
	Aug 09 18:14:13 functional-901000 kubelet[7640]: I0809 18:14:13.709266    7640 scope.go:115] "RemoveContainer" containerID="410b8d7c0d582de6d1f02d7d51f3af0a5893b01dbba8dcb3d8a79bae84be2264"
	Aug 09 18:14:21 functional-901000 kubelet[7640]: I0809 18:14:21.626011    7640 scope.go:115] "RemoveContainer" containerID="cd3767774236a886a9035e90d9a3e8d31f7e0628cdc7ea3cc9e7a7b8def0c94e"
	Aug 09 18:14:22 functional-901000 kubelet[7640]: I0809 18:14:22.782002    7640 scope.go:115] "RemoveContainer" containerID="cd3767774236a886a9035e90d9a3e8d31f7e0628cdc7ea3cc9e7a7b8def0c94e"
	Aug 09 18:14:22 functional-901000 kubelet[7640]: I0809 18:14:22.782394    7640 scope.go:115] "RemoveContainer" containerID="cfb63818398114c3803a21fa44b54eafd7c29fbad6a3b51668e71ec889dab0d2"
	Aug 09 18:14:22 functional-901000 kubelet[7640]: E0809 18:14:22.782693    7640 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-7b684b55f9-r2vqb_default(c1ad0be7-b3ff-4b79-bb48-ae2752acf50a)\"" pod="default/hello-node-7b684b55f9-r2vqb" podUID=c1ad0be7-b3ff-4b79-bb48-ae2752acf50a
	Aug 09 18:14:25 functional-901000 kubelet[7640]: I0809 18:14:25.624650    7640 scope.go:115] "RemoveContainer" containerID="e30b2ac8805d801720118d9e81cdaea0c958b53972778292b27c4cee29efd449"
	Aug 09 18:14:25 functional-901000 kubelet[7640]: E0809 18:14:25.628234    7640 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-58d66798bb-2gr4w_default(c4c18db8-f524-4c1b-8037-0b0e7861d4e7)\"" pod="default/hello-node-connect-58d66798bb-2gr4w" podUID=c4c18db8-f524-4c1b-8037-0b0e7861d4e7
	
	* 
	* ==> storage-provisioner [613044467ab5] <==
	* I0809 18:13:18.259864       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0809 18:13:18.265107       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0809 18:13:18.265125       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0809 18:13:35.678858       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0809 18:13:35.679532       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-901000_cb99e09f-5cec-4af3-868c-5fd818e94600!
	I0809 18:13:35.681410       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"360246c8-c662-4d0e-83c2-1edab72be5bb", APIVersion:"v1", ResourceVersion:"623", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-901000_cb99e09f-5cec-4af3-868c-5fd818e94600 became leader
	I0809 18:13:35.780517       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-901000_cb99e09f-5cec-4af3-868c-5fd818e94600!
	I0809 18:13:47.584533       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0809 18:13:47.584568       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    6af88087-e7b6-451b-91c7-000e03617834 364 0 2023-08-09 18:11:36 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2023-08-09 18:11:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-cd921c84-e385-4539-bb5c-625dae1c07cf &PersistentVolumeClaim{ObjectMeta:{myclaim  default  cd921c84-e385-4539-bb5c-625dae1c07cf 682 0 2023-08-09 18:13:47 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["Rea
dWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2023-08-09 18:13:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2023-08-09 18:13:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:
ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0809 18:13:47.585104       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"cd921c84-e385-4539-bb5c-625dae1c07cf", APIVersion:"v1", ResourceVersion:"682", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0809 18:13:47.585887       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-cd921c84-e385-4539-bb5c-625dae1c07cf" provisioned
	I0809 18:13:47.585951       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0809 18:13:47.585954       1 volume_store.go:212] Trying to save persistentvolume "pvc-cd921c84-e385-4539-bb5c-625dae1c07cf"
	I0809 18:13:47.590994       1 volume_store.go:219] persistentvolume "pvc-cd921c84-e385-4539-bb5c-625dae1c07cf" saved
	I0809 18:13:47.591201       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"cd921c84-e385-4539-bb5c-625dae1c07cf", APIVersion:"v1", ResourceVersion:"682", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-cd921c84-e385-4539-bb5c-625dae1c07cf
	
	* 
	* ==> storage-provisioner [f6881adcd1a3] <==
	* I0809 18:12:36.041809       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0809 18:12:37.697210       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0809 18:12:37.697238       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0809 18:12:55.104082       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0809 18:12:55.104170       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-901000_fcd9b2ce-6208-4488-86ed-14f69a24306e!
	I0809 18:12:55.104643       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"360246c8-c662-4d0e-83c2-1edab72be5bb", APIVersion:"v1", ResourceVersion:"525", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-901000_fcd9b2ce-6208-4488-86ed-14f69a24306e became leader
	I0809 18:12:55.205322       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-901000_fcd9b2ce-6208-4488-86ed-14f69a24306e!

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-901000 -n functional-901000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-901000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (34.60s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-901000 image ls --format short --alsologtostderr:

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-901000 image ls --format short --alsologtostderr:
I0809 11:14:46.368943    2175 out.go:296] Setting OutFile to fd 1 ...
I0809 11:14:46.369069    2175 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0809 11:14:46.369072    2175 out.go:309] Setting ErrFile to fd 2...
I0809 11:14:46.369075    2175 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0809 11:14:46.369196    2175 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17011-995/.minikube/bin
I0809 11:14:46.369612    2175 config.go:182] Loaded profile config "functional-901000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
I0809 11:14:46.369669    2175 config.go:182] Loaded profile config "functional-901000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
W0809 11:14:46.369898    2175 cache_images.go:695] error getting status for functional-901000: state: connect: dial unix /Users/jenkins/minikube-integration/17011-995/.minikube/machines/functional-901000/monitor: connect: connection refused
functional_test.go:274: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

TestImageBuild/serial/BuildWithBuildArg (1.07s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-340000
image_test.go:105: failed to pass build-args with args: "out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-340000" : 
-- stdout --
	Sending build context to Docker daemon  2.048kB
	Step 1/5 : FROM gcr.io/google-containers/alpine-with-bash:1.0
	 ---> 822c13824dc2
	Step 2/5 : ARG ENV_A
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in a8c3a09960a8
	Removing intermediate container a8c3a09960a8
	 ---> a3ce8dd81880
	Step 3/5 : ARG ENV_B
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in c992a9740773
	Removing intermediate container c992a9740773
	 ---> aa8e580123b2
	Step 4/5 : RUN echo "test-build-arg" $ENV_A $ENV_B
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in 25b38d0dbf47
	exec /bin/sh: exec format error

-- /stdout --
** stderr ** 
	DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
	            Install the buildx component to build images with BuildKit:
	            https://docs.docker.com/go/buildx/
	
	The command '/bin/sh -c echo "test-build-arg" $ENV_A $ENV_B' returned a non-zero code: 1

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-340000 -n image-340000
helpers_test.go:244: <<< TestImageBuild/serial/BuildWithBuildArg FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestImageBuild/serial/BuildWithBuildArg]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p image-340000 logs -n 25
helpers_test.go:252: TestImageBuild/serial/BuildWithBuildArg logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|----------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                           Args                           |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|----------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| docker-env     | functional-901000 docker-env                             | functional-901000 | jenkins | v1.31.1 | 09 Aug 23 11:14 PDT | 09 Aug 23 11:14 PDT |
	| ssh            | functional-901000 ssh sudo cat                           | functional-901000 | jenkins | v1.31.1 | 09 Aug 23 11:14 PDT | 09 Aug 23 11:14 PDT |
	|                | /etc/test/nested/copy/1410/hosts                         |                   |         |         |                     |                     |
	| update-context | functional-901000                                        | functional-901000 | jenkins | v1.31.1 | 09 Aug 23 11:14 PDT | 09 Aug 23 11:14 PDT |
	|                | update-context                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                   |                   |         |         |                     |                     |
	| update-context | functional-901000                                        | functional-901000 | jenkins | v1.31.1 | 09 Aug 23 11:14 PDT | 09 Aug 23 11:14 PDT |
	|                | update-context                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                   |                   |         |         |                     |                     |
	| update-context | functional-901000                                        | functional-901000 | jenkins | v1.31.1 | 09 Aug 23 11:14 PDT | 09 Aug 23 11:14 PDT |
	|                | update-context                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                   |                   |         |         |                     |                     |
	| image          | functional-901000 image ls                               | functional-901000 | jenkins | v1.31.1 | 09 Aug 23 11:14 PDT | 09 Aug 23 11:14 PDT |
	| image          | functional-901000 image load --daemon                    | functional-901000 | jenkins | v1.31.1 | 09 Aug 23 11:14 PDT | 09 Aug 23 11:14 PDT |
	|                | gcr.io/google-containers/addon-resizer:functional-901000 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-901000 image ls                               | functional-901000 | jenkins | v1.31.1 | 09 Aug 23 11:14 PDT | 09 Aug 23 11:14 PDT |
	| image          | functional-901000 image save                             | functional-901000 | jenkins | v1.31.1 | 09 Aug 23 11:14 PDT | 09 Aug 23 11:14 PDT |
	|                | gcr.io/google-containers/addon-resizer:functional-901000 |                   |         |         |                     |                     |
	|                | /Users/jenkins/workspace/addon-resizer-save.tar          |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-901000 image rm                               | functional-901000 | jenkins | v1.31.1 | 09 Aug 23 11:14 PDT | 09 Aug 23 11:14 PDT |
	|                | gcr.io/google-containers/addon-resizer:functional-901000 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-901000 image ls                               | functional-901000 | jenkins | v1.31.1 | 09 Aug 23 11:14 PDT | 09 Aug 23 11:14 PDT |
	| image          | functional-901000 image load                             | functional-901000 | jenkins | v1.31.1 | 09 Aug 23 11:14 PDT | 09 Aug 23 11:14 PDT |
	|                | /Users/jenkins/workspace/addon-resizer-save.tar          |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-901000 image ls                               | functional-901000 | jenkins | v1.31.1 | 09 Aug 23 11:14 PDT | 09 Aug 23 11:14 PDT |
	| image          | functional-901000 image save --daemon                    | functional-901000 | jenkins | v1.31.1 | 09 Aug 23 11:14 PDT | 09 Aug 23 11:14 PDT |
	|                | gcr.io/google-containers/addon-resizer:functional-901000 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-901000                                        | functional-901000 | jenkins | v1.31.1 | 09 Aug 23 11:14 PDT | 09 Aug 23 11:14 PDT |
	|                | image ls --format yaml                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-901000                                        | functional-901000 | jenkins | v1.31.1 | 09 Aug 23 11:14 PDT | 09 Aug 23 11:14 PDT |
	|                | image ls --format short                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| ssh            | functional-901000 ssh pgrep                              | functional-901000 | jenkins | v1.31.1 | 09 Aug 23 11:14 PDT |                     |
	|                | buildkitd                                                |                   |         |         |                     |                     |
	| image          | functional-901000                                        | functional-901000 | jenkins | v1.31.1 | 09 Aug 23 11:14 PDT | 09 Aug 23 11:14 PDT |
	|                | image ls --format json                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-901000 image build -t                         | functional-901000 | jenkins | v1.31.1 | 09 Aug 23 11:14 PDT | 09 Aug 23 11:14 PDT |
	|                | localhost/my-image:functional-901000                     |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                         |                   |         |         |                     |                     |
	| image          | functional-901000                                        | functional-901000 | jenkins | v1.31.1 | 09 Aug 23 11:14 PDT | 09 Aug 23 11:14 PDT |
	|                | image ls --format table                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-901000 image ls                               | functional-901000 | jenkins | v1.31.1 | 09 Aug 23 11:14 PDT | 09 Aug 23 11:14 PDT |
	| delete         | -p functional-901000                                     | functional-901000 | jenkins | v1.31.1 | 09 Aug 23 11:14 PDT | 09 Aug 23 11:14 PDT |
	| start          | -p image-340000 --driver=qemu2                           | image-340000      | jenkins | v1.31.1 | 09 Aug 23 11:14 PDT | 09 Aug 23 11:15 PDT |
	|                |                                                          |                   |         |         |                     |                     |
	| image          | build -t aaa:latest                                      | image-340000      | jenkins | v1.31.1 | 09 Aug 23 11:15 PDT | 09 Aug 23 11:15 PDT |
	|                | ./testdata/image-build/test-normal                       |                   |         |         |                     |                     |
	|                | -p image-340000                                          |                   |         |         |                     |                     |
	| image          | build -t aaa:latest                                      | image-340000      | jenkins | v1.31.1 | 09 Aug 23 11:15 PDT | 09 Aug 23 11:15 PDT |
	|                | --build-opt=build-arg=ENV_A=test_env_str                 |                   |         |         |                     |                     |
	|                | --build-opt=no-cache                                     |                   |         |         |                     |                     |
	|                | ./testdata/image-build/test-arg -p                       |                   |         |         |                     |                     |
	|                | image-340000                                             |                   |         |         |                     |                     |
	|----------------|----------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/09 11:14:48
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.6 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0809 11:14:48.490594    2202 out.go:296] Setting OutFile to fd 1 ...
	I0809 11:14:48.490709    2202 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:14:48.490711    2202 out.go:309] Setting ErrFile to fd 2...
	I0809 11:14:48.490713    2202 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:14:48.490820    2202 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17011-995/.minikube/bin
	I0809 11:14:48.491821    2202 out.go:303] Setting JSON to false
	I0809 11:14:48.508097    2202 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":862,"bootTime":1691604026,"procs":417,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0809 11:14:48.508154    2202 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0809 11:14:48.511058    2202 out.go:177] * [image-340000] minikube v1.31.1 on Darwin 13.5 (arm64)
	I0809 11:14:48.523060    2202 out.go:177]   - MINIKUBE_LOCATION=17011
	I0809 11:14:48.519217    2202 notify.go:220] Checking for updates...
	I0809 11:14:48.529106    2202 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	I0809 11:14:48.532013    2202 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0809 11:14:48.535097    2202 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0809 11:14:48.538123    2202 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	I0809 11:14:48.541113    2202 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0809 11:14:48.544217    2202 driver.go:373] Setting default libvirt URI to qemu:///system
	I0809 11:14:48.548116    2202 out.go:177] * Using the qemu2 driver based on user configuration
	I0809 11:14:48.555088    2202 start.go:298] selected driver: qemu2
	I0809 11:14:48.555092    2202 start.go:901] validating driver "qemu2" against <nil>
	I0809 11:14:48.555097    2202 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0809 11:14:48.555156    2202 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0809 11:14:48.558049    2202 out.go:177] * Automatically selected the socket_vmnet network
	I0809 11:14:48.563360    2202 start_flags.go:382] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0809 11:14:48.563466    2202 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0809 11:14:48.563484    2202 cni.go:84] Creating CNI manager for ""
	I0809 11:14:48.563489    2202 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0809 11:14:48.563494    2202 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0809 11:14:48.563498    2202 start_flags.go:319] config:
	{Name:image-340000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:image-340000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0809 11:14:48.568012    2202 iso.go:125] acquiring lock: {Name:mkcf9da2bbc06f7ffafe691590a499fa3fc28d1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 11:14:48.571144    2202 out.go:177] * Starting control plane node image-340000 in cluster image-340000
	I0809 11:14:48.579093    2202 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0809 11:14:48.579109    2202 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4
	I0809 11:14:48.579119    2202 cache.go:57] Caching tarball of preloaded images
	I0809 11:14:48.579182    2202 preload.go:174] Found /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0809 11:14:48.579186    2202 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on docker
	I0809 11:14:48.579402    2202 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/image-340000/config.json ...
	I0809 11:14:48.579413    2202 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/image-340000/config.json: {Name:mk3a24a8c3edcfe98a7444f47400f7acf67c1574 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 11:14:48.579593    2202 start.go:365] acquiring machines lock for image-340000: {Name:mk8e7245a2916cb0ab3d957fdcdeb632388214d9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0809 11:14:48.579619    2202 start.go:369] acquired machines lock for "image-340000" in 23.334µs
	I0809 11:14:48.579627    2202 start.go:93] Provisioning new machine with config: &{Name:image-340000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.27.4 ClusterName:image-340000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0809 11:14:48.579654    2202 start.go:125] createHost starting for "" (driver="qemu2")
	I0809 11:14:48.588119    2202 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0809 11:14:48.609469    2202 start.go:159] libmachine.API.Create for "image-340000" (driver="qemu2")
	I0809 11:14:48.609503    2202 client.go:168] LocalClient.Create starting
	I0809 11:14:48.609567    2202 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem
	I0809 11:14:48.609593    2202 main.go:141] libmachine: Decoding PEM data...
	I0809 11:14:48.609601    2202 main.go:141] libmachine: Parsing certificate...
	I0809 11:14:48.609638    2202 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/cert.pem
	I0809 11:14:48.609655    2202 main.go:141] libmachine: Decoding PEM data...
	I0809 11:14:48.609663    2202 main.go:141] libmachine: Parsing certificate...
	I0809 11:14:48.609985    2202 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17011-995/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17011-995/.minikube/cache/iso/arm64/minikube-v1.31.0-1690838458-16971-arm64.iso...
	I0809 11:14:48.904515    2202 main.go:141] libmachine: Creating SSH key...
	I0809 11:14:48.945640    2202 main.go:141] libmachine: Creating Disk image...
	I0809 11:14:48.945644    2202 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0809 11:14:48.945777    2202 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17011-995/.minikube/machines/image-340000/disk.qcow2.raw /Users/jenkins/minikube-integration/17011-995/.minikube/machines/image-340000/disk.qcow2
	I0809 11:14:48.961693    2202 main.go:141] libmachine: STDOUT: 
	I0809 11:14:48.961702    2202 main.go:141] libmachine: STDERR: 
	I0809 11:14:48.961759    2202 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/image-340000/disk.qcow2 +20000M
	I0809 11:14:48.968840    2202 main.go:141] libmachine: STDOUT: Image resized.
	
	I0809 11:14:48.968850    2202 main.go:141] libmachine: STDERR: 
	I0809 11:14:48.968868    2202 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17011-995/.minikube/machines/image-340000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17011-995/.minikube/machines/image-340000/disk.qcow2
	I0809 11:14:48.968874    2202 main.go:141] libmachine: Starting QEMU VM...
	I0809 11:14:48.968912    2202 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17011-995/.minikube/machines/image-340000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/image-340000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17011-995/.minikube/machines/image-340000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:84:56:32:32:4d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/image-340000/disk.qcow2
	I0809 11:14:49.010077    2202 main.go:141] libmachine: STDOUT: 
	I0809 11:14:49.010096    2202 main.go:141] libmachine: STDERR: 
	I0809 11:14:49.010099    2202 main.go:141] libmachine: Attempt 0
	I0809 11:14:49.010115    2202 main.go:141] libmachine: Searching for d2:84:56:32:32:4d in /var/db/dhcpd_leases ...
	I0809 11:14:49.010177    2202 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0809 11:14:49.010195    2202 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:d6:b2:36:74:10:3e ID:1,d6:b2:36:74:10:3e Lease:0x64d5283a}
	I0809 11:14:49.010201    2202 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:16:25:b8:60:50:4 ID:1,16:25:b8:60:50:4 Lease:0x64d3d6ad}
	I0809 11:14:49.010205    2202 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:7e:6b:2e:9d:a1:90 ID:1,7e:6b:2e:9d:a1:90 Lease:0x64d527ea}
	I0809 11:14:51.012340    2202 main.go:141] libmachine: Attempt 1
	I0809 11:14:51.012383    2202 main.go:141] libmachine: Searching for d2:84:56:32:32:4d in /var/db/dhcpd_leases ...
	I0809 11:14:51.012717    2202 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0809 11:14:51.012760    2202 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:d6:b2:36:74:10:3e ID:1,d6:b2:36:74:10:3e Lease:0x64d5283a}
	I0809 11:14:51.012788    2202 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:16:25:b8:60:50:4 ID:1,16:25:b8:60:50:4 Lease:0x64d3d6ad}
	I0809 11:14:51.012824    2202 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:7e:6b:2e:9d:a1:90 ID:1,7e:6b:2e:9d:a1:90 Lease:0x64d527ea}
	I0809 11:14:53.014665    2202 main.go:141] libmachine: Attempt 2
	I0809 11:14:53.014680    2202 main.go:141] libmachine: Searching for d2:84:56:32:32:4d in /var/db/dhcpd_leases ...
	I0809 11:14:53.014795    2202 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0809 11:14:53.014805    2202 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:d6:b2:36:74:10:3e ID:1,d6:b2:36:74:10:3e Lease:0x64d5283a}
	I0809 11:14:53.014810    2202 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:16:25:b8:60:50:4 ID:1,16:25:b8:60:50:4 Lease:0x64d3d6ad}
	I0809 11:14:53.014814    2202 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:7e:6b:2e:9d:a1:90 ID:1,7e:6b:2e:9d:a1:90 Lease:0x64d527ea}
	I0809 11:14:55.016872    2202 main.go:141] libmachine: Attempt 3
	I0809 11:14:55.016910    2202 main.go:141] libmachine: Searching for d2:84:56:32:32:4d in /var/db/dhcpd_leases ...
	I0809 11:14:55.016998    2202 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0809 11:14:55.017008    2202 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:d6:b2:36:74:10:3e ID:1,d6:b2:36:74:10:3e Lease:0x64d5283a}
	I0809 11:14:55.017013    2202 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:16:25:b8:60:50:4 ID:1,16:25:b8:60:50:4 Lease:0x64d3d6ad}
	I0809 11:14:55.017017    2202 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:7e:6b:2e:9d:a1:90 ID:1,7e:6b:2e:9d:a1:90 Lease:0x64d527ea}
	I0809 11:14:57.019006    2202 main.go:141] libmachine: Attempt 4
	I0809 11:14:57.019016    2202 main.go:141] libmachine: Searching for d2:84:56:32:32:4d in /var/db/dhcpd_leases ...
	I0809 11:14:57.019082    2202 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0809 11:14:57.019091    2202 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:d6:b2:36:74:10:3e ID:1,d6:b2:36:74:10:3e Lease:0x64d5283a}
	I0809 11:14:57.019106    2202 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:16:25:b8:60:50:4 ID:1,16:25:b8:60:50:4 Lease:0x64d3d6ad}
	I0809 11:14:57.019121    2202 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:7e:6b:2e:9d:a1:90 ID:1,7e:6b:2e:9d:a1:90 Lease:0x64d527ea}
	I0809 11:14:59.021118    2202 main.go:141] libmachine: Attempt 5
	I0809 11:14:59.021136    2202 main.go:141] libmachine: Searching for d2:84:56:32:32:4d in /var/db/dhcpd_leases ...
	I0809 11:14:59.021210    2202 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0809 11:14:59.021220    2202 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:d6:b2:36:74:10:3e ID:1,d6:b2:36:74:10:3e Lease:0x64d5283a}
	I0809 11:14:59.021236    2202 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:16:25:b8:60:50:4 ID:1,16:25:b8:60:50:4 Lease:0x64d3d6ad}
	I0809 11:14:59.021241    2202 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:7e:6b:2e:9d:a1:90 ID:1,7e:6b:2e:9d:a1:90 Lease:0x64d527ea}
	I0809 11:15:01.023312    2202 main.go:141] libmachine: Attempt 6
	I0809 11:15:01.023346    2202 main.go:141] libmachine: Searching for d2:84:56:32:32:4d in /var/db/dhcpd_leases ...
	I0809 11:15:01.023619    2202 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0809 11:15:01.023649    2202 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:d2:84:56:32:32:4d ID:1,d2:84:56:32:32:4d Lease:0x64d52923}
	I0809 11:15:01.023659    2202 main.go:141] libmachine: Found match: d2:84:56:32:32:4d
	I0809 11:15:01.023693    2202 main.go:141] libmachine: IP: 192.168.105.5
	I0809 11:15:01.023705    2202 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.5)...
	I0809 11:15:03.043137    2202 machine.go:88] provisioning docker machine ...
	I0809 11:15:03.043224    2202 buildroot.go:166] provisioning hostname "image-340000"
	I0809 11:15:03.043447    2202 main.go:141] libmachine: Using SSH client type: native
	I0809 11:15:03.044435    2202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100f01590] 0x100f03ff0 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0809 11:15:03.044450    2202 main.go:141] libmachine: About to run SSH command:
	sudo hostname image-340000 && echo "image-340000" | sudo tee /etc/hostname
	I0809 11:15:03.147191    2202 main.go:141] libmachine: SSH cmd err, output: <nil>: image-340000
	
	I0809 11:15:03.147308    2202 main.go:141] libmachine: Using SSH client type: native
	I0809 11:15:03.147815    2202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100f01590] 0x100f03ff0 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0809 11:15:03.147827    2202 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\simage-340000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 image-340000/g' /etc/hosts;
				else 
					echo '127.0.1.1 image-340000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0809 11:15:03.228312    2202 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0809 11:15:03.228328    2202 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17011-995/.minikube CaCertPath:/Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17011-995/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17011-995/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17011-995/.minikube}
	I0809 11:15:03.228341    2202 buildroot.go:174] setting up certificates
	I0809 11:15:03.228348    2202 provision.go:83] configureAuth start
	I0809 11:15:03.228353    2202 provision.go:138] copyHostCerts
	I0809 11:15:03.228509    2202 exec_runner.go:144] found /Users/jenkins/minikube-integration/17011-995/.minikube/cert.pem, removing ...
	I0809 11:15:03.228516    2202 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17011-995/.minikube/cert.pem
	I0809 11:15:03.228723    2202 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17011-995/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17011-995/.minikube/cert.pem (1123 bytes)
	I0809 11:15:03.229007    2202 exec_runner.go:144] found /Users/jenkins/minikube-integration/17011-995/.minikube/key.pem, removing ...
	I0809 11:15:03.229009    2202 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17011-995/.minikube/key.pem
	I0809 11:15:03.229071    2202 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17011-995/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17011-995/.minikube/key.pem (1679 bytes)
	I0809 11:15:03.229228    2202 exec_runner.go:144] found /Users/jenkins/minikube-integration/17011-995/.minikube/ca.pem, removing ...
	I0809 11:15:03.229230    2202 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17011-995/.minikube/ca.pem
	I0809 11:15:03.229293    2202 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17011-995/.minikube/ca.pem (1082 bytes)
	I0809 11:15:03.229413    2202 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17011-995/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca-key.pem org=jenkins.image-340000 san=[192.168.105.5 192.168.105.5 localhost 127.0.0.1 minikube image-340000]
	I0809 11:15:03.377115    2202 provision.go:172] copyRemoteCerts
	I0809 11:15:03.377148    2202 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0809 11:15:03.377155    2202 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/image-340000/id_rsa Username:docker}
	I0809 11:15:03.416594    2202 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17011-995/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0809 11:15:03.424024    2202 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0809 11:15:03.431466    2202 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17011-995/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0809 11:15:03.438839    2202 provision.go:86] duration metric: configureAuth took 210.493458ms
	I0809 11:15:03.438855    2202 buildroot.go:189] setting minikube options for container-runtime
	I0809 11:15:03.438965    2202 config.go:182] Loaded profile config "image-340000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0809 11:15:03.439012    2202 main.go:141] libmachine: Using SSH client type: native
	I0809 11:15:03.439230    2202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100f01590] 0x100f03ff0 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0809 11:15:03.439233    2202 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0809 11:15:03.506204    2202 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0809 11:15:03.506209    2202 buildroot.go:70] root file system type: tmpfs
	I0809 11:15:03.506288    2202 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0809 11:15:03.506342    2202 main.go:141] libmachine: Using SSH client type: native
	I0809 11:15:03.506602    2202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100f01590] 0x100f03ff0 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0809 11:15:03.506642    2202 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0809 11:15:03.578995    2202 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0809 11:15:03.579047    2202 main.go:141] libmachine: Using SSH client type: native
	I0809 11:15:03.579306    2202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100f01590] 0x100f03ff0 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0809 11:15:03.579314    2202 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0809 11:15:03.905436    2202 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0809 11:15:03.905454    2202 machine.go:91] provisioned docker machine in 862.331083ms
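The `sudo diff -u … || { mv …; systemctl daemon-reload … }` command above is a write-then-swap-if-changed idiom: the new unit file is installed, and the daemon reloaded, only when it differs from (or, as here, is missing) the current one. A minimal standalone sketch with throwaway temp files (paths are illustrative, not the real systemd units):

```shell
old=$(mktemp) && new=$(mktemp)
printf 'A=1\n' > "$old"
printf 'A=2\n' > "$new"
# diff exits non-zero when the files differ or the target is missing,
# so the replacement branch runs only in that case.
diff -u "$old" "$new" >/dev/null 2>&1 || mv "$new" "$old"
```

Because `diff` succeeds (exit 0) on identical files, re-running the command is a no-op, which keeps the provisioning step idempotent.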
	I0809 11:15:03.905459    2202 client.go:171] LocalClient.Create took 15.296474459s
	I0809 11:15:03.905473    2202 start.go:167] duration metric: libmachine.API.Create for "image-340000" took 15.296535s
	I0809 11:15:03.905476    2202 start.go:300] post-start starting for "image-340000" (driver="qemu2")
	I0809 11:15:03.905480    2202 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0809 11:15:03.905558    2202 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0809 11:15:03.905569    2202 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/image-340000/id_rsa Username:docker}
	I0809 11:15:03.940526    2202 ssh_runner.go:195] Run: cat /etc/os-release
	I0809 11:15:03.942008    2202 info.go:137] Remote host: Buildroot 2021.02.12
	I0809 11:15:03.942012    2202 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17011-995/.minikube/addons for local assets ...
	I0809 11:15:03.942073    2202 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17011-995/.minikube/files for local assets ...
	I0809 11:15:03.942173    2202 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17011-995/.minikube/files/etc/ssl/certs/14102.pem -> 14102.pem in /etc/ssl/certs
	I0809 11:15:03.942284    2202 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0809 11:15:03.950896    2202 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17011-995/.minikube/files/etc/ssl/certs/14102.pem --> /etc/ssl/certs/14102.pem (1708 bytes)
	I0809 11:15:03.958806    2202 start.go:303] post-start completed in 53.324ms
	I0809 11:15:03.959254    2202 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/image-340000/config.json ...
	I0809 11:15:03.959409    2202 start.go:128] duration metric: createHost completed in 15.380275792s
	I0809 11:15:03.959451    2202 main.go:141] libmachine: Using SSH client type: native
	I0809 11:15:03.959677    2202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100f01590] 0x100f03ff0 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0809 11:15:03.959680    2202 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0809 11:15:04.026248    2202 main.go:141] libmachine: SSH cmd err, output: <nil>: 1691604903.615119919
	
	I0809 11:15:04.026253    2202 fix.go:206] guest clock: 1691604903.615119919
	I0809 11:15:04.026257    2202 fix.go:219] Guest: 2023-08-09 11:15:03.615119919 -0700 PDT Remote: 2023-08-09 11:15:03.959413 -0700 PDT m=+15.488471084 (delta=-344.293081ms)
	I0809 11:15:04.026267    2202 fix.go:190] guest clock delta is within tolerance: -344.293081ms
	I0809 11:15:04.026269    2202 start.go:83] releasing machines lock for "image-340000", held for 15.447172958s
	I0809 11:15:04.026552    2202 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0809 11:15:04.026553    2202 ssh_runner.go:195] Run: cat /version.json
	I0809 11:15:04.026559    2202 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/image-340000/id_rsa Username:docker}
	I0809 11:15:04.026570    2202 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/image-340000/id_rsa Username:docker}
	I0809 11:15:04.062602    2202 ssh_runner.go:195] Run: systemctl --version
	I0809 11:15:04.104433    2202 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0809 11:15:04.106487    2202 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0809 11:15:04.106513    2202 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0809 11:15:04.111849    2202 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0809 11:15:04.111854    2202 start.go:466] detecting cgroup driver to use...
	I0809 11:15:04.111935    2202 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0809 11:15:04.117980    2202 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0809 11:15:04.121241    2202 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0809 11:15:04.124316    2202 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0809 11:15:04.124336    2202 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0809 11:15:04.127685    2202 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0809 11:15:04.131494    2202 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0809 11:15:04.134917    2202 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0809 11:15:04.137843    2202 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0809 11:15:04.140798    2202 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0809 11:15:04.144156    2202 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0809 11:15:04.147343    2202 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0809 11:15:04.150465    2202 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0809 11:15:04.213698    2202 ssh_runner.go:195] Run: sudo systemctl restart containerd
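The `sed -i -r` invocations above rewrite containerd's config in place; a sketch of the `SystemdCgroup` edit against a throwaway copy (GNU sed assumed; the real target is /etc/containerd/config.toml):

```shell
cfg=$(mktemp)
printf '    SystemdCgroup = true\n' > "$cfg"
# Capture the leading indentation in \1 so the line keeps its place in the TOML.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
```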
	I0809 11:15:04.223441    2202 start.go:466] detecting cgroup driver to use...
	I0809 11:15:04.223504    2202 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0809 11:15:04.228864    2202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0809 11:15:04.233675    2202 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0809 11:15:04.241146    2202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0809 11:15:04.245994    2202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0809 11:15:04.250552    2202 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0809 11:15:04.279548    2202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0809 11:15:04.284573    2202 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0809 11:15:04.289857    2202 ssh_runner.go:195] Run: which cri-dockerd
	I0809 11:15:04.291079    2202 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0809 11:15:04.293703    2202 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0809 11:15:04.298373    2202 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0809 11:15:04.357295    2202 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0809 11:15:04.417903    2202 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0809 11:15:04.417911    2202 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0809 11:15:04.422972    2202 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0809 11:15:04.481477    2202 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0809 11:15:05.655231    2202 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.17378175s)
	I0809 11:15:05.655294    2202 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0809 11:15:05.718160    2202 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0809 11:15:05.776202    2202 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0809 11:15:05.838789    2202 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0809 11:15:05.896942    2202 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0809 11:15:05.903875    2202 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0809 11:15:05.968891    2202 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0809 11:15:05.992253    2202 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0809 11:15:05.992312    2202 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0809 11:15:05.994566    2202 start.go:534] Will wait 60s for crictl version
	I0809 11:15:05.994611    2202 ssh_runner.go:195] Run: which crictl
	I0809 11:15:05.996076    2202 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0809 11:15:06.011839    2202 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.4
	RuntimeApiVersion:  v1alpha2
	I0809 11:15:06.011907    2202 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0809 11:15:06.021858    2202 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0809 11:15:06.040222    2202 out.go:204] * Preparing Kubernetes v1.27.4 on Docker 24.0.4 ...
	I0809 11:15:06.040333    2202 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0809 11:15:06.041663    2202 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0809 11:15:06.045154    2202 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0809 11:15:06.045195    2202 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0809 11:15:06.050486    2202 docker.go:636] Got preloaded images: 
	I0809 11:15:06.050490    2202 docker.go:642] registry.k8s.io/kube-apiserver:v1.27.4 wasn't preloaded
	I0809 11:15:06.050530    2202 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0809 11:15:06.053293    2202 ssh_runner.go:195] Run: which lz4
	I0809 11:15:06.054594    2202 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0809 11:15:06.055975    2202 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0809 11:15:06.055987    2202 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (343658271 bytes)
	I0809 11:15:07.352894    2202 docker.go:600] Took 1.298383 seconds to copy over tarball
	I0809 11:15:07.352945    2202 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0809 11:15:08.383646    2202 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.030713875s)
	I0809 11:15:08.383654    2202 ssh_runner.go:146] rm: /preloaded.tar.lz4
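The `stat` probe a few lines earlier is how the runner decides whether the preload tarball must be transferred: a non-zero exit means "absent, copy it over". A standalone sketch of the same check (hypothetical path; plain `stat` for portability instead of the log's `stat -c` format string):

```shell
f=$(mktemp -u)   # reserve a path without creating the file
if ! stat "$f" >/dev/null 2>&1; then
  copy_needed=yes   # here minikube would scp the preload tarball
else
  copy_needed=no
fi
```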
	I0809 11:15:08.399287    2202 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0809 11:15:08.402574    2202 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0809 11:15:08.407901    2202 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0809 11:15:08.470924    2202 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0809 11:15:09.932637    2202 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.461751833s)
	I0809 11:15:09.932716    2202 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0809 11:15:09.938641    2202 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.4
	registry.k8s.io/kube-scheduler:v1.27.4
	registry.k8s.io/kube-controller-manager:v1.27.4
	registry.k8s.io/kube-proxy:v1.27.4
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0809 11:15:09.938647    2202 cache_images.go:84] Images are preloaded, skipping loading
	I0809 11:15:09.938708    2202 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0809 11:15:09.946271    2202 cni.go:84] Creating CNI manager for ""
	I0809 11:15:09.946277    2202 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0809 11:15:09.946292    2202 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0809 11:15:09.946301    2202 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.5 APIServerPort:8443 KubernetesVersion:v1.27.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:image-340000 NodeName:image-340000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0809 11:15:09.946377    2202 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "image-340000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0809 11:15:09.946413    2202 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=image-340000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.4 ClusterName:image-340000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0809 11:15:09.946461    2202 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.4
	I0809 11:15:09.949447    2202 binaries.go:44] Found k8s binaries, skipping transfer
	I0809 11:15:09.949469    2202 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0809 11:15:09.952179    2202 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I0809 11:15:09.957224    2202 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0809 11:15:09.962002    2202 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2098 bytes)
	I0809 11:15:09.967050    2202 ssh_runner.go:195] Run: grep 192.168.105.5	control-plane.minikube.internal$ /etc/hosts
	I0809 11:15:09.968381    2202 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.5	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
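The /etc/hosts update above is made idempotent by filtering out any existing line for the name before appending the current mapping; a sketch against a temp file (addresses are illustrative, and bash's `$'…'` tab quoting is assumed, as in the log):

```shell
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.105.9\tcontrol-plane.minikube.internal\n' > "$hosts"
# Drop the stale entry (if any), append the fresh one, then copy back.
{ grep -v $'\tcontrol-plane.minikube.internal$' "$hosts"
  printf '192.168.105.5\tcontrol-plane.minikube.internal\n'; } > "$hosts.new"
cp "$hosts.new" "$hosts"
```

Writing to a temp file and copying it back avoids truncating the hosts file while `grep` is still reading it.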
	I0809 11:15:09.972276    2202 certs.go:56] Setting up /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/image-340000 for IP: 192.168.105.5
	I0809 11:15:09.972282    2202 certs.go:190] acquiring lock for shared ca certs: {Name:mkc408918270161d0a558be6b69aedd9ebd20eac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 11:15:09.972414    2202 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17011-995/.minikube/ca.key
	I0809 11:15:09.972450    2202 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17011-995/.minikube/proxy-client-ca.key
	I0809 11:15:09.972473    2202 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/image-340000/client.key
	I0809 11:15:09.972478    2202 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/image-340000/client.crt with IP's: []
	I0809 11:15:10.012112    2202 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/image-340000/client.crt ...
	I0809 11:15:10.012114    2202 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/image-340000/client.crt: {Name:mk880473ab5870342d1fff2eb6a478690852dc7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 11:15:10.012322    2202 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/image-340000/client.key ...
	I0809 11:15:10.012324    2202 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/image-340000/client.key: {Name:mk73411ab219c096caf7fa7e97fd1e4cf5517e5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 11:15:10.012439    2202 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/image-340000/apiserver.key.e69b33ca
	I0809 11:15:10.012444    2202 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/image-340000/apiserver.crt.e69b33ca with IP's: [192.168.105.5 10.96.0.1 127.0.0.1 10.0.0.1]
	I0809 11:15:10.122600    2202 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/image-340000/apiserver.crt.e69b33ca ...
	I0809 11:15:10.122602    2202 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/image-340000/apiserver.crt.e69b33ca: {Name:mk941a746060f49ee2a092e83a2c61ee255e463a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 11:15:10.122737    2202 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/image-340000/apiserver.key.e69b33ca ...
	I0809 11:15:10.122739    2202 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/image-340000/apiserver.key.e69b33ca: {Name:mk06fd1560feed6c56d52ddf83fe982b22d78701 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 11:15:10.122836    2202 certs.go:337] copying /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/image-340000/apiserver.crt.e69b33ca -> /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/image-340000/apiserver.crt
	I0809 11:15:10.123075    2202 certs.go:341] copying /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/image-340000/apiserver.key.e69b33ca -> /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/image-340000/apiserver.key
	I0809 11:15:10.123186    2202 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/image-340000/proxy-client.key
	I0809 11:15:10.123193    2202 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/image-340000/proxy-client.crt with IP's: []
	I0809 11:15:10.227736    2202 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/image-340000/proxy-client.crt ...
	I0809 11:15:10.227739    2202 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/image-340000/proxy-client.crt: {Name:mkc3efd40c86b7742f6d3f58afb3e70160bd4a3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 11:15:10.227898    2202 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/image-340000/proxy-client.key ...
	I0809 11:15:10.227900    2202 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/image-340000/proxy-client.key: {Name:mka12777babffd997afe810905e5d02b90a10660 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 11:15:10.228161    2202 certs.go:437] found cert: /Users/jenkins/minikube-integration/17011-995/.minikube/certs/Users/jenkins/minikube-integration/17011-995/.minikube/certs/1410.pem (1338 bytes)
	W0809 11:15:10.228191    2202 certs.go:433] ignoring /Users/jenkins/minikube-integration/17011-995/.minikube/certs/Users/jenkins/minikube-integration/17011-995/.minikube/certs/1410_empty.pem, impossibly tiny 0 bytes
	I0809 11:15:10.228198    2202 certs.go:437] found cert: /Users/jenkins/minikube-integration/17011-995/.minikube/certs/Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca-key.pem (1679 bytes)
	I0809 11:15:10.228226    2202 certs.go:437] found cert: /Users/jenkins/minikube-integration/17011-995/.minikube/certs/Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem (1082 bytes)
	I0809 11:15:10.228247    2202 certs.go:437] found cert: /Users/jenkins/minikube-integration/17011-995/.minikube/certs/Users/jenkins/minikube-integration/17011-995/.minikube/certs/cert.pem (1123 bytes)
	I0809 11:15:10.228268    2202 certs.go:437] found cert: /Users/jenkins/minikube-integration/17011-995/.minikube/certs/Users/jenkins/minikube-integration/17011-995/.minikube/certs/key.pem (1679 bytes)
	I0809 11:15:10.228314    2202 certs.go:437] found cert: /Users/jenkins/minikube-integration/17011-995/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17011-995/.minikube/files/etc/ssl/certs/14102.pem (1708 bytes)
	I0809 11:15:10.228707    2202 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/image-340000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0809 11:15:10.236094    2202 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/image-340000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0809 11:15:10.243202    2202 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/image-340000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0809 11:15:10.250168    2202 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/image-340000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0809 11:15:10.256888    2202 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17011-995/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0809 11:15:10.263920    2202 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17011-995/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0809 11:15:10.271439    2202 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17011-995/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0809 11:15:10.278353    2202 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17011-995/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0809 11:15:10.285397    2202 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17011-995/.minikube/certs/1410.pem --> /usr/share/ca-certificates/1410.pem (1338 bytes)
	I0809 11:15:10.292144    2202 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17011-995/.minikube/files/etc/ssl/certs/14102.pem --> /usr/share/ca-certificates/14102.pem (1708 bytes)
	I0809 11:15:10.299665    2202 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17011-995/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0809 11:15:10.306958    2202 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0809 11:15:10.312141    2202 ssh_runner.go:195] Run: openssl version
	I0809 11:15:10.314305    2202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1410.pem && ln -fs /usr/share/ca-certificates/1410.pem /etc/ssl/certs/1410.pem"
	I0809 11:15:10.317675    2202 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1410.pem
	I0809 11:15:10.319203    2202 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug  9 18:10 /usr/share/ca-certificates/1410.pem
	I0809 11:15:10.319225    2202 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1410.pem
	I0809 11:15:10.321185    2202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1410.pem /etc/ssl/certs/51391683.0"
	I0809 11:15:10.324366    2202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14102.pem && ln -fs /usr/share/ca-certificates/14102.pem /etc/ssl/certs/14102.pem"
	I0809 11:15:10.327839    2202 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14102.pem
	I0809 11:15:10.329479    2202 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug  9 18:10 /usr/share/ca-certificates/14102.pem
	I0809 11:15:10.329497    2202 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14102.pem
	I0809 11:15:10.331948    2202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14102.pem /etc/ssl/certs/3ec20f2e.0"
	I0809 11:15:10.335449    2202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0809 11:15:10.338862    2202 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0809 11:15:10.340335    2202 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug  9 18:09 /usr/share/ca-certificates/minikubeCA.pem
	I0809 11:15:10.340351    2202 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0809 11:15:10.342139    2202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0809 11:15:10.344891    2202 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0809 11:15:10.346321    2202 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0809 11:15:10.346345    2202 kubeadm.go:404] StartCluster: {Name:image-340000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:image-340000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0809 11:15:10.346412    2202 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0809 11:15:10.352185    2202 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0809 11:15:10.355641    2202 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0809 11:15:10.358708    2202 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0809 11:15:10.361296    2202 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0809 11:15:10.361307    2202 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0809 11:15:10.384200    2202 kubeadm.go:322] [init] Using Kubernetes version: v1.27.4
	I0809 11:15:10.384234    2202 kubeadm.go:322] [preflight] Running pre-flight checks
	I0809 11:15:10.439130    2202 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0809 11:15:10.439186    2202 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0809 11:15:10.439236    2202 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0809 11:15:10.498402    2202 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0809 11:15:10.508553    2202 out.go:204]   - Generating certificates and keys ...
	I0809 11:15:10.508589    2202 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0809 11:15:10.508642    2202 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0809 11:15:10.610136    2202 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0809 11:15:10.684848    2202 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0809 11:15:10.803860    2202 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0809 11:15:10.859935    2202 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0809 11:15:10.932592    2202 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0809 11:15:10.932646    2202 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [image-340000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I0809 11:15:11.191585    2202 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0809 11:15:11.191659    2202 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [image-340000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I0809 11:15:11.254123    2202 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0809 11:15:11.330988    2202 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0809 11:15:11.404373    2202 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0809 11:15:11.404398    2202 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0809 11:15:11.542150    2202 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0809 11:15:11.663681    2202 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0809 11:15:11.761065    2202 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0809 11:15:11.851435    2202 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0809 11:15:11.858169    2202 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0809 11:15:11.858259    2202 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0809 11:15:11.858276    2202 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0809 11:15:11.921910    2202 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0809 11:15:11.931062    2202 out.go:204]   - Booting up control plane ...
	I0809 11:15:11.931118    2202 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0809 11:15:11.931157    2202 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0809 11:15:11.931189    2202 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0809 11:15:11.931231    2202 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0809 11:15:11.931329    2202 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0809 11:15:15.931095    2202 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.003991 seconds
	I0809 11:15:15.931223    2202 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0809 11:15:15.943599    2202 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0809 11:15:16.464967    2202 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0809 11:15:16.465184    2202 kubeadm.go:322] [mark-control-plane] Marking the node image-340000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0809 11:15:16.976384    2202 kubeadm.go:322] [bootstrap-token] Using token: bbzp3s.rku26yol4yyfj660
	I0809 11:15:16.980336    2202 out.go:204]   - Configuring RBAC rules ...
	I0809 11:15:16.980453    2202 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0809 11:15:16.982019    2202 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0809 11:15:16.988488    2202 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0809 11:15:16.989416    2202 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0809 11:15:16.990992    2202 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0809 11:15:16.992404    2202 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0809 11:15:16.997445    2202 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0809 11:15:17.163719    2202 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0809 11:15:17.383991    2202 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0809 11:15:17.384651    2202 kubeadm.go:322] 
	I0809 11:15:17.384679    2202 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0809 11:15:17.384681    2202 kubeadm.go:322] 
	I0809 11:15:17.384713    2202 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0809 11:15:17.384714    2202 kubeadm.go:322] 
	I0809 11:15:17.384724    2202 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0809 11:15:17.384751    2202 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0809 11:15:17.384784    2202 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0809 11:15:17.384788    2202 kubeadm.go:322] 
	I0809 11:15:17.384816    2202 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0809 11:15:17.384818    2202 kubeadm.go:322] 
	I0809 11:15:17.384848    2202 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0809 11:15:17.384850    2202 kubeadm.go:322] 
	I0809 11:15:17.384876    2202 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0809 11:15:17.384912    2202 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0809 11:15:17.384939    2202 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0809 11:15:17.384941    2202 kubeadm.go:322] 
	I0809 11:15:17.384982    2202 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0809 11:15:17.385018    2202 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0809 11:15:17.385019    2202 kubeadm.go:322] 
	I0809 11:15:17.385059    2202 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token bbzp3s.rku26yol4yyfj660 \
	I0809 11:15:17.385114    2202 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c906fcf1732ee135ed5d8c53a2456ece48422acee8957afd996ec13f4bd01100 \
	I0809 11:15:17.385125    2202 kubeadm.go:322] 	--control-plane 
	I0809 11:15:17.385127    2202 kubeadm.go:322] 
	I0809 11:15:17.385161    2202 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0809 11:15:17.385162    2202 kubeadm.go:322] 
	I0809 11:15:17.385202    2202 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token bbzp3s.rku26yol4yyfj660 \
	I0809 11:15:17.385258    2202 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c906fcf1732ee135ed5d8c53a2456ece48422acee8957afd996ec13f4bd01100 
	I0809 11:15:17.385310    2202 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0809 11:15:17.385314    2202 cni.go:84] Creating CNI manager for ""
	I0809 11:15:17.385320    2202 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0809 11:15:17.392131    2202 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0809 11:15:17.395348    2202 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0809 11:15:17.398384    2202 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0809 11:15:17.403117    2202 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0809 11:15:17.403171    2202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 11:15:17.403171    2202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl label nodes minikube.k8s.io/version=v1.31.1 minikube.k8s.io/commit=e286a113bb5db20a65222adef757d15268cdbb1a minikube.k8s.io/name=image-340000 minikube.k8s.io/updated_at=2023_08_09T11_15_17_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 11:15:17.456880    2202 kubeadm.go:1081] duration metric: took 53.74475ms to wait for elevateKubeSystemPrivileges.
	I0809 11:15:17.468189    2202 ops.go:34] apiserver oom_adj: -16
	I0809 11:15:17.468194    2202 kubeadm.go:406] StartCluster complete in 7.122092333s
	I0809 11:15:17.468203    2202 settings.go:142] acquiring lock: {Name:mkccab662ae5271e860bc4bdf3048d54a609848d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 11:15:17.468295    2202 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17011-995/kubeconfig
	I0809 11:15:17.468603    2202 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17011-995/kubeconfig: {Name:mk08b0de0097dc34716acdd012f0f4571979d434 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 11:15:17.468771    2202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0809 11:15:17.468816    2202 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0809 11:15:17.468853    2202 addons.go:69] Setting storage-provisioner=true in profile "image-340000"
	I0809 11:15:17.468866    2202 addons.go:231] Setting addon storage-provisioner=true in "image-340000"
	I0809 11:15:17.468882    2202 config.go:182] Loaded profile config "image-340000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0809 11:15:17.468886    2202 host.go:66] Checking if "image-340000" exists ...
	I0809 11:15:17.468907    2202 addons.go:69] Setting default-storageclass=true in profile "image-340000"
	I0809 11:15:17.468930    2202 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "image-340000"
	I0809 11:15:17.474235    2202 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0809 11:15:17.478282    2202 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0809 11:15:17.478286    2202 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0809 11:15:17.478293    2202 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/image-340000/id_rsa Username:docker}
	I0809 11:15:17.484563    2202 addons.go:231] Setting addon default-storageclass=true in "image-340000"
	I0809 11:15:17.484578    2202 host.go:66] Checking if "image-340000" exists ...
	I0809 11:15:17.485370    2202 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0809 11:15:17.485374    2202 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0809 11:15:17.485380    2202 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/image-340000/id_rsa Username:docker}
	I0809 11:15:17.487201    2202 kapi.go:248] "coredns" deployment in "kube-system" namespace and "image-340000" context rescaled to 1 replicas
	I0809 11:15:17.487212    2202 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0809 11:15:17.494265    2202 out.go:177] * Verifying Kubernetes components...
	I0809 11:15:17.495463    2202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0809 11:15:17.517963    2202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0809 11:15:17.518283    2202 api_server.go:52] waiting for apiserver process to appear ...
	I0809 11:15:17.518314    2202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0809 11:15:17.535857    2202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0809 11:15:17.555385    2202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0809 11:15:17.986338    2202 start.go:901] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0809 11:15:17.986366    2202 api_server.go:72] duration metric: took 499.164208ms to wait for apiserver process to appear ...
	I0809 11:15:17.986369    2202 api_server.go:88] waiting for apiserver healthz status ...
	I0809 11:15:17.986380    2202 api_server.go:253] Checking apiserver healthz at https://192.168.105.5:8443/healthz ...
	I0809 11:15:17.989523    2202 api_server.go:279] https://192.168.105.5:8443/healthz returned 200:
	ok
	I0809 11:15:17.990182    2202 api_server.go:141] control plane version: v1.27.4
	I0809 11:15:17.990186    2202 api_server.go:131] duration metric: took 3.815292ms to wait for apiserver health ...
	I0809 11:15:17.990189    2202 system_pods.go:43] waiting for kube-system pods to appear ...
	I0809 11:15:17.993016    2202 system_pods.go:59] 4 kube-system pods found
	I0809 11:15:17.993025    2202 system_pods.go:61] "etcd-image-340000" [f51463df-a805-4c94-af6d-6b7f4b94541e] Pending
	I0809 11:15:17.993028    2202 system_pods.go:61] "kube-apiserver-image-340000" [7d3aaf89-877a-43d1-ba22-10ec624618f6] Pending
	I0809 11:15:17.993030    2202 system_pods.go:61] "kube-controller-manager-image-340000" [39f56e9d-b160-4d3a-908c-202ac632bc40] Pending
	I0809 11:15:17.993032    2202 system_pods.go:61] "kube-scheduler-image-340000" [eb0e0dbb-4195-4fee-8ba7-709fd4ae2017] Pending
	I0809 11:15:17.993034    2202 system_pods.go:74] duration metric: took 2.843ms to wait for pod list to return data ...
	I0809 11:15:17.993037    2202 kubeadm.go:581] duration metric: took 505.835541ms to wait for : map[apiserver:true system_pods:true] ...
	I0809 11:15:17.993042    2202 node_conditions.go:102] verifying NodePressure condition ...
	I0809 11:15:17.994332    2202 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0809 11:15:17.994340    2202 node_conditions.go:123] node cpu capacity is 2
	I0809 11:15:17.994345    2202 node_conditions.go:105] duration metric: took 1.301333ms to run NodePressure ...
	I0809 11:15:17.994349    2202 start.go:228] waiting for startup goroutines ...
	I0809 11:15:18.036722    2202 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0809 11:15:18.044937    2202 addons.go:502] enable addons completed in 576.152125ms: enabled=[default-storageclass storage-provisioner]
	I0809 11:15:18.044952    2202 start.go:233] waiting for cluster config update ...
	I0809 11:15:18.044959    2202 start.go:242] writing updated cluster config ...
	I0809 11:15:18.045235    2202 ssh_runner.go:195] Run: rm -f paused
	I0809 11:15:18.072126    2202 start.go:599] kubectl: 1.27.2, cluster: 1.27.4 (minor skew: 0)
	I0809 11:15:18.076032    2202 out.go:177] * Done! kubectl is now configured to use "image-340000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Wed 2023-08-09 18:14:59 UTC, ends at Wed 2023-08-09 18:15:19 UTC. --
	Aug 09 18:15:12 image-340000 cri-dockerd[1058]: time="2023-08-09T18:15:12Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e92d40eada6320a795715ebcdbe774fc936ef925f21b0221cfe4a016988878b9/resolv.conf as [nameserver 192.168.105.1]"
	Aug 09 18:15:12 image-340000 cri-dockerd[1058]: time="2023-08-09T18:15:12Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b4b516f63b2117f7b7a9968d633d25d6d7c409db0c13533b0e23422745694d5b/resolv.conf as [nameserver 192.168.105.1]"
	Aug 09 18:15:12 image-340000 dockerd[1166]: time="2023-08-09T18:15:12.611215965Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 09 18:15:12 image-340000 dockerd[1166]: time="2023-08-09T18:15:12.611272048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 09 18:15:12 image-340000 dockerd[1166]: time="2023-08-09T18:15:12.611284506Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 09 18:15:12 image-340000 dockerd[1166]: time="2023-08-09T18:15:12.611293090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 09 18:15:12 image-340000 dockerd[1166]: time="2023-08-09T18:15:12.622356548Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 09 18:15:12 image-340000 dockerd[1166]: time="2023-08-09T18:15:12.622452381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 09 18:15:12 image-340000 dockerd[1166]: time="2023-08-09T18:15:12.622479631Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 09 18:15:12 image-340000 dockerd[1166]: time="2023-08-09T18:15:12.622503965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 09 18:15:12 image-340000 dockerd[1166]: time="2023-08-09T18:15:12.656109715Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 09 18:15:12 image-340000 dockerd[1166]: time="2023-08-09T18:15:12.656150798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 09 18:15:12 image-340000 dockerd[1166]: time="2023-08-09T18:15:12.656229631Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 09 18:15:12 image-340000 dockerd[1166]: time="2023-08-09T18:15:12.656237090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 09 18:15:18 image-340000 dockerd[1160]: time="2023-08-09T18:15:18.667959926Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Aug 09 18:15:18 image-340000 dockerd[1160]: time="2023-08-09T18:15:18.793003093Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Aug 09 18:15:18 image-340000 dockerd[1160]: time="2023-08-09T18:15:18.808202343Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Aug 09 18:15:18 image-340000 dockerd[1166]: time="2023-08-09T18:15:18.840620884Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 09 18:15:18 image-340000 dockerd[1166]: time="2023-08-09T18:15:18.840648759Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 09 18:15:18 image-340000 dockerd[1166]: time="2023-08-09T18:15:18.840659718Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 09 18:15:18 image-340000 dockerd[1166]: time="2023-08-09T18:15:18.840664593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 09 18:15:19 image-340000 dockerd[1160]: time="2023-08-09T18:15:19.589649708Z" level=info msg="ignoring event" container=25b38d0dbf4742424f055a5d07c209c7aa92cc1e03dfb99d6048555d1d669dd2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 09 18:15:19 image-340000 dockerd[1166]: time="2023-08-09T18:15:19.589769583Z" level=info msg="shim disconnected" id=25b38d0dbf4742424f055a5d07c209c7aa92cc1e03dfb99d6048555d1d669dd2 namespace=moby
	Aug 09 18:15:19 image-340000 dockerd[1166]: time="2023-08-09T18:15:19.589799167Z" level=warning msg="cleaning up after shim disconnected" id=25b38d0dbf4742424f055a5d07c209c7aa92cc1e03dfb99d6048555d1d669dd2 namespace=moby
	Aug 09 18:15:19 image-340000 dockerd[1166]: time="2023-08-09T18:15:19.589803625Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	a77d2ae656149       6eb63895cb67f       7 seconds ago       Running             kube-scheduler            0                   b4b516f63b211
	7ed6f0fc921c3       24bc64e911039       7 seconds ago       Running             etcd                      0                   e92d40eada632
	2e91c09c04aba       389f6f052cf83       7 seconds ago       Running             kube-controller-manager   0                   ceb1be104b880
	e7eceddeeb8b4       64aece92d6bde       7 seconds ago       Running             kube-apiserver            0                   cd1195bd6b18e
	
	* 
	* ==> describe nodes <==
	* Name:               image-340000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=image-340000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e286a113bb5db20a65222adef757d15268cdbb1a
	                    minikube.k8s.io/name=image-340000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_09T11_15_17_0700
	                    minikube.k8s.io/version=v1.31.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 09 Aug 2023 18:15:14 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  image-340000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 09 Aug 2023 18:15:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 09 Aug 2023 18:15:16 +0000   Wed, 09 Aug 2023 18:15:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 09 Aug 2023 18:15:16 +0000   Wed, 09 Aug 2023 18:15:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 09 Aug 2023 18:15:16 +0000   Wed, 09 Aug 2023 18:15:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 09 Aug 2023 18:15:16 +0000   Wed, 09 Aug 2023 18:15:13 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
	Addresses:
	  InternalIP:  192.168.105.5
	  Hostname:    image-340000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 3981aa1c1c4a477da35ccff398343cb0
	  System UUID:                3981aa1c1c4a477da35ccff398343cb0
	  Boot ID:                    49b4de48-e752-4706-8723-4e9852627198
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.4
	  Kubelet Version:            v1.27.4
	  Kube-Proxy Version:         v1.27.4
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-image-340000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4s
	  kube-system                 kube-apiserver-image-340000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-controller-manager-image-340000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-scheduler-image-340000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (2%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From     Message
	  ----    ------                   ----  ----     -------
	  Normal  Starting                 4s    kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  4s    kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4s    kubelet  Node image-340000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4s    kubelet  Node image-340000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4s    kubelet  Node image-340000 status is now: NodeHasSufficientPID
	
	* 
	* ==> dmesg <==
	* [Aug 9 18:14] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.652809] EINJ: EINJ table not found.
	[  +0.528155] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.043392] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000868] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[Aug 9 18:15] systemd-fstab-generator[479]: Ignoring "noauto" for root device
	[  +0.058427] systemd-fstab-generator[490]: Ignoring "noauto" for root device
	[  +0.439868] systemd-fstab-generator[759]: Ignoring "noauto" for root device
	[  +0.144342] systemd-fstab-generator[794]: Ignoring "noauto" for root device
	[  +0.060908] systemd-fstab-generator[805]: Ignoring "noauto" for root device
	[  +0.063923] systemd-fstab-generator[818]: Ignoring "noauto" for root device
	[  +1.235723] systemd-fstab-generator[977]: Ignoring "noauto" for root device
	[  +0.057304] systemd-fstab-generator[988]: Ignoring "noauto" for root device
	[  +0.061780] systemd-fstab-generator[999]: Ignoring "noauto" for root device
	[  +0.061566] systemd-fstab-generator[1010]: Ignoring "noauto" for root device
	[  +0.069875] systemd-fstab-generator[1051]: Ignoring "noauto" for root device
	[  +2.503005] systemd-fstab-generator[1153]: Ignoring "noauto" for root device
	[  +1.442218] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.004154] systemd-fstab-generator[1484]: Ignoring "noauto" for root device
	[  +5.143169] systemd-fstab-generator[2364]: Ignoring "noauto" for root device
	[  +2.209223] kauditd_printk_skb: 41 callbacks suppressed
	
	* 
	* ==> etcd [7ed6f0fc921c] <==
	* {"level":"info","ts":"2023-08-09T18:15:12.947Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-08-09T18:15:12.947Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"58de0efec1d86300","initial-advertise-peer-urls":["https://192.168.105.5:2380"],"listen-peer-urls":["https://192.168.105.5:2380"],"advertise-client-urls":["https://192.168.105.5:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.5:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-08-09T18:15:12.947Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-08-09T18:15:12.947Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.105.5:2380"}
	{"level":"info","ts":"2023-08-09T18:15:12.947Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.105.5:2380"}
	{"level":"info","ts":"2023-08-09T18:15:12.947Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 switched to configuration voters=(6403572207504089856)"}
	{"level":"info","ts":"2023-08-09T18:15:12.950Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cd5c0afff2184bea","local-member-id":"58de0efec1d86300","added-peer-id":"58de0efec1d86300","added-peer-peer-urls":["https://192.168.105.5:2380"]}
	{"level":"info","ts":"2023-08-09T18:15:13.499Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 is starting a new election at term 1"}
	{"level":"info","ts":"2023-08-09T18:15:13.499Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-08-09T18:15:13.499Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 received MsgPreVoteResp from 58de0efec1d86300 at term 1"}
	{"level":"info","ts":"2023-08-09T18:15:13.499Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became candidate at term 2"}
	{"level":"info","ts":"2023-08-09T18:15:13.499Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 received MsgVoteResp from 58de0efec1d86300 at term 2"}
	{"level":"info","ts":"2023-08-09T18:15:13.499Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became leader at term 2"}
	{"level":"info","ts":"2023-08-09T18:15:13.499Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 58de0efec1d86300 elected leader 58de0efec1d86300 at term 2"}
	{"level":"info","ts":"2023-08-09T18:15:13.500Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-09T18:15:13.500Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"58de0efec1d86300","local-member-attributes":"{Name:image-340000 ClientURLs:[https://192.168.105.5:2379]}","request-path":"/0/members/58de0efec1d86300/attributes","cluster-id":"cd5c0afff2184bea","publish-timeout":"7s"}
	{"level":"info","ts":"2023-08-09T18:15:13.501Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-09T18:15:13.501Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cd5c0afff2184bea","local-member-id":"58de0efec1d86300","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-09T18:15:13.501Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-09T18:15:13.501Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-09T18:15:13.501Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-09T18:15:13.501Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.105.5:2379"}
	{"level":"info","ts":"2023-08-09T18:15:13.501Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-08-09T18:15:13.501Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-08-09T18:15:13.501Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  18:15:20 up 0 min,  0 users,  load average: 0.44, 0.10, 0.03
	Linux image-340000 5.10.57 #1 SMP PREEMPT Mon Jul 31 23:05:09 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [e7eceddeeb8b] <==
	* I0809 18:15:14.197601       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0809 18:15:14.197608       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0809 18:15:14.198057       1 controller.go:624] quota admission added evaluator for: namespaces
	I0809 18:15:14.208196       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0809 18:15:14.208253       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0809 18:15:14.208475       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0809 18:15:14.208519       1 aggregator.go:152] initial CRD sync complete...
	I0809 18:15:14.208536       1 autoregister_controller.go:141] Starting autoregister controller
	I0809 18:15:14.208542       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0809 18:15:14.208545       1 cache.go:39] Caches are synced for autoregister controller
	I0809 18:15:14.227619       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0809 18:15:14.947427       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0809 18:15:15.108106       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0809 18:15:15.113828       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0809 18:15:15.113854       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0809 18:15:15.272028       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0809 18:15:15.282528       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0809 18:15:15.383866       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0809 18:15:15.386471       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.105.5]
	I0809 18:15:15.386965       1 controller.go:624] quota admission added evaluator for: endpoints
	I0809 18:15:15.388572       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0809 18:15:16.176389       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0809 18:15:16.748278       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0809 18:15:16.752499       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0809 18:15:16.760182       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	
	* 
	* ==> kube-controller-manager [2e91c09c04ab] <==
	* E0809 18:15:18.374317       1 core.go:92] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail"
	I0809 18:15:18.374329       1 controllermanager.go:616] "Warning: skipping controller" controller="service"
	I0809 18:15:18.575881       1 controllermanager.go:638] "Started controller" controller="disruption"
	I0809 18:15:18.575915       1 disruption.go:423] Sending events to api server.
	I0809 18:15:18.575928       1 disruption.go:434] Starting disruption controller
	I0809 18:15:18.575934       1 shared_informer.go:311] Waiting for caches to sync for disruption
	I0809 18:15:18.724322       1 controllermanager.go:638] "Started controller" controller="tokencleaner"
	I0809 18:15:18.724350       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0809 18:15:18.724354       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0809 18:15:18.724357       1 shared_informer.go:318] Caches are synced for token_cleaner
	I0809 18:15:19.477106       1 controllermanager.go:638] "Started controller" controller="replicaset"
	I0809 18:15:19.477170       1 replica_set.go:201] "Starting controller" name="replicaset"
	I0809 18:15:19.477175       1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
	I0809 18:15:19.626124       1 controllermanager.go:638] "Started controller" controller="ttl"
	I0809 18:15:19.626158       1 ttl_controller.go:124] "Starting TTL controller"
	I0809 18:15:19.626164       1 shared_informer.go:311] Waiting for caches to sync for TTL
	I0809 18:15:19.777286       1 controllermanager.go:638] "Started controller" controller="pvc-protection"
	I0809 18:15:19.777355       1 pvc_protection_controller.go:102] "Starting PVC protection controller"
	I0809 18:15:19.777364       1 shared_informer.go:311] Waiting for caches to sync for PVC protection
	I0809 18:15:19.928571       1 controllermanager.go:638] "Started controller" controller="endpointslicemirroring"
	I0809 18:15:19.928615       1 endpointslicemirroring_controller.go:211] Starting EndpointSliceMirroring controller
	I0809 18:15:19.928620       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring
	I0809 18:15:20.076338       1 controllermanager.go:638] "Started controller" controller="serviceaccount"
	I0809 18:15:20.076368       1 serviceaccounts_controller.go:111] "Starting service account controller"
	I0809 18:15:20.076373       1 shared_informer.go:311] Waiting for caches to sync for service account
	
	* 
	* ==> kube-scheduler [a77d2ae65614] <==
	* W0809 18:15:14.184448       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0809 18:15:14.184659       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0809 18:15:14.184458       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0809 18:15:14.184669       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0809 18:15:14.184485       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0809 18:15:14.184701       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0809 18:15:14.998028       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0809 18:15:14.998068       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0809 18:15:15.021582       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0809 18:15:15.021609       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0809 18:15:15.068930       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0809 18:15:15.068971       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0809 18:15:15.106829       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0809 18:15:15.107129       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0809 18:15:15.147138       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0809 18:15:15.147183       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0809 18:15:15.153012       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0809 18:15:15.153066       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0809 18:15:15.155809       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0809 18:15:15.155869       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0809 18:15:15.188431       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0809 18:15:15.188516       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0809 18:15:15.191025       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0809 18:15:15.191082       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0809 18:15:15.681723       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-08-09 18:14:59 UTC, ends at Wed 2023-08-09 18:15:20 UTC. --
	Aug 09 18:15:16 image-340000 kubelet[2370]: I0809 18:15:16.893904    2370 topology_manager.go:212] "Topology Admit Handler"
	Aug 09 18:15:16 image-340000 kubelet[2370]: I0809 18:15:16.893926    2370 topology_manager.go:212] "Topology Admit Handler"
	Aug 09 18:15:16 image-340000 kubelet[2370]: I0809 18:15:16.900377    2370 kubelet_node_status.go:70] "Attempting to register node" node="image-340000"
	Aug 09 18:15:16 image-340000 kubelet[2370]: I0809 18:15:16.905150    2370 kubelet_node_status.go:108] "Node was previously registered" node="image-340000"
	Aug 09 18:15:16 image-340000 kubelet[2370]: I0809 18:15:16.905217    2370 kubelet_node_status.go:73] "Successfully registered node" node="image-340000"
	Aug 09 18:15:16 image-340000 kubelet[2370]: I0809 18:15:16.993562    2370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d3095bc5c680b89701c2bb07e64a9521-flexvolume-dir\") pod \"kube-controller-manager-image-340000\" (UID: \"d3095bc5c680b89701c2bb07e64a9521\") " pod="kube-system/kube-controller-manager-image-340000"
	Aug 09 18:15:16 image-340000 kubelet[2370]: I0809 18:15:16.993605    2370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d3095bc5c680b89701c2bb07e64a9521-kubeconfig\") pod \"kube-controller-manager-image-340000\" (UID: \"d3095bc5c680b89701c2bb07e64a9521\") " pod="kube-system/kube-controller-manager-image-340000"
	Aug 09 18:15:16 image-340000 kubelet[2370]: I0809 18:15:16.993665    2370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/f851315c9492611b425c41371f68f1fa-etcd-data\") pod \"etcd-image-340000\" (UID: \"f851315c9492611b425c41371f68f1fa\") " pod="kube-system/etcd-image-340000"
	Aug 09 18:15:16 image-340000 kubelet[2370]: I0809 18:15:16.993722    2370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0ce4165f6a87e020014c5217f80a99b3-k8s-certs\") pod \"kube-apiserver-image-340000\" (UID: \"0ce4165f6a87e020014c5217f80a99b3\") " pod="kube-system/kube-apiserver-image-340000"
	Aug 09 18:15:16 image-340000 kubelet[2370]: I0809 18:15:16.993737    2370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0ce4165f6a87e020014c5217f80a99b3-usr-share-ca-certificates\") pod \"kube-apiserver-image-340000\" (UID: \"0ce4165f6a87e020014c5217f80a99b3\") " pod="kube-system/kube-apiserver-image-340000"
	Aug 09 18:15:16 image-340000 kubelet[2370]: I0809 18:15:16.993749    2370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d3095bc5c680b89701c2bb07e64a9521-ca-certs\") pod \"kube-controller-manager-image-340000\" (UID: \"d3095bc5c680b89701c2bb07e64a9521\") " pod="kube-system/kube-controller-manager-image-340000"
	Aug 09 18:15:16 image-340000 kubelet[2370]: I0809 18:15:16.993758    2370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d3095bc5c680b89701c2bb07e64a9521-k8s-certs\") pod \"kube-controller-manager-image-340000\" (UID: \"d3095bc5c680b89701c2bb07e64a9521\") " pod="kube-system/kube-controller-manager-image-340000"
	Aug 09 18:15:16 image-340000 kubelet[2370]: I0809 18:15:16.993768    2370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d3095bc5c680b89701c2bb07e64a9521-usr-share-ca-certificates\") pod \"kube-controller-manager-image-340000\" (UID: \"d3095bc5c680b89701c2bb07e64a9521\") " pod="kube-system/kube-controller-manager-image-340000"
	Aug 09 18:15:16 image-340000 kubelet[2370]: I0809 18:15:16.993808    2370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4101ebdcdadb0c320c318beb663d9244-kubeconfig\") pod \"kube-scheduler-image-340000\" (UID: \"4101ebdcdadb0c320c318beb663d9244\") " pod="kube-system/kube-scheduler-image-340000"
	Aug 09 18:15:16 image-340000 kubelet[2370]: I0809 18:15:16.993821    2370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/f851315c9492611b425c41371f68f1fa-etcd-certs\") pod \"etcd-image-340000\" (UID: \"f851315c9492611b425c41371f68f1fa\") " pod="kube-system/etcd-image-340000"
	Aug 09 18:15:16 image-340000 kubelet[2370]: I0809 18:15:16.993831    2370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0ce4165f6a87e020014c5217f80a99b3-ca-certs\") pod \"kube-apiserver-image-340000\" (UID: \"0ce4165f6a87e020014c5217f80a99b3\") " pod="kube-system/kube-apiserver-image-340000"
	Aug 09 18:15:17 image-340000 kubelet[2370]: I0809 18:15:17.781405    2370 apiserver.go:52] "Watching apiserver"
	Aug 09 18:15:17 image-340000 kubelet[2370]: I0809 18:15:17.792782    2370 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world"
	Aug 09 18:15:17 image-340000 kubelet[2370]: I0809 18:15:17.804986    2370 reconciler.go:41] "Reconciler: start to sync state"
	Aug 09 18:15:17 image-340000 kubelet[2370]: E0809 18:15:17.856112    2370 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-apiserver-image-340000\" already exists" pod="kube-system/kube-apiserver-image-340000"
	Aug 09 18:15:17 image-340000 kubelet[2370]: E0809 18:15:17.856755    2370 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-scheduler-image-340000\" already exists" pod="kube-system/kube-scheduler-image-340000"
	Aug 09 18:15:17 image-340000 kubelet[2370]: I0809 18:15:17.866050    2370 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-image-340000" podStartSLOduration=1.865901509 podCreationTimestamp="2023-08-09 18:15:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-08-09 18:15:17.865351259 +0000 UTC m=+1.129313543" watchObservedRunningTime="2023-08-09 18:15:17.865901509 +0000 UTC m=+1.129863793"
	Aug 09 18:15:17 image-340000 kubelet[2370]: I0809 18:15:17.872844    2370 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-image-340000" podStartSLOduration=1.872802884 podCreationTimestamp="2023-08-09 18:15:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-08-09 18:15:17.869028967 +0000 UTC m=+1.132991210" watchObservedRunningTime="2023-08-09 18:15:17.872802884 +0000 UTC m=+1.136765168"
	Aug 09 18:15:17 image-340000 kubelet[2370]: I0809 18:15:17.876191    2370 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-image-340000" podStartSLOduration=1.876175592 podCreationTimestamp="2023-08-09 18:15:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-08-09 18:15:17.872969842 +0000 UTC m=+1.136932085" watchObservedRunningTime="2023-08-09 18:15:17.876175592 +0000 UTC m=+1.140137877"
	Aug 09 18:15:17 image-340000 kubelet[2370]: I0809 18:15:17.880318    2370 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-image-340000" podStartSLOduration=1.880289509 podCreationTimestamp="2023-08-09 18:15:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-08-09 18:15:17.876280592 +0000 UTC m=+1.140242877" watchObservedRunningTime="2023-08-09 18:15:17.880289509 +0000 UTC m=+1.144251793"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p image-340000 -n image-340000
helpers_test.go:261: (dbg) Run:  kubectl --context image-340000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestImageBuild/serial/BuildWithBuildArg]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context image-340000 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context image-340000 describe pod storage-provisioner: exit status 1 (37.510166ms)

** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:279: kubectl --context image-340000 describe pod storage-provisioner: exit status 1
--- FAIL: TestImageBuild/serial/BuildWithBuildArg (1.07s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (55.04s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-050000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-050000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (17.958365709s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-050000 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-050000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [ed4ceb24-e141-44c1-a506-1e33eb190e2b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [ed4ceb24-e141-44c1-a506-1e33eb190e2b] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.013516833s
addons_test.go:238: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-050000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-050000 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-050000 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.105.6
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.105.6: exit status 1 (15.0373325s)

-- stdout --
	;; connection timed out; no servers could be reached
	

-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.105.6" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-050000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-050000 addons disable ingress-dns --alsologtostderr -v=1: (4.693525333s)
addons_test.go:287: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-050000 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-050000 addons disable ingress --alsologtostderr -v=1: (7.073929125s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ingress-addon-legacy-050000 -n ingress-addon-legacy-050000
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-050000 logs -n 25
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| Command |                           Args                           |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| image   | functional-901000 image ls                               | functional-901000           | jenkins | v1.31.1 | 09 Aug 23 11:14 PDT | 09 Aug 23 11:14 PDT |
	| image   | functional-901000 image load                             | functional-901000           | jenkins | v1.31.1 | 09 Aug 23 11:14 PDT | 09 Aug 23 11:14 PDT |
	|         | /Users/jenkins/workspace/addon-resizer-save.tar          |                             |         |         |                     |                     |
	|         | --alsologtostderr                                        |                             |         |         |                     |                     |
	| image   | functional-901000 image ls                               | functional-901000           | jenkins | v1.31.1 | 09 Aug 23 11:14 PDT | 09 Aug 23 11:14 PDT |
	| image   | functional-901000 image save --daemon                    | functional-901000           | jenkins | v1.31.1 | 09 Aug 23 11:14 PDT | 09 Aug 23 11:14 PDT |
	|         | gcr.io/google-containers/addon-resizer:functional-901000 |                             |         |         |                     |                     |
	|         | --alsologtostderr                                        |                             |         |         |                     |                     |
	| image   | functional-901000                                        | functional-901000           | jenkins | v1.31.1 | 09 Aug 23 11:14 PDT | 09 Aug 23 11:14 PDT |
	|         | image ls --format yaml                                   |                             |         |         |                     |                     |
	|         | --alsologtostderr                                        |                             |         |         |                     |                     |
	| image   | functional-901000                                        | functional-901000           | jenkins | v1.31.1 | 09 Aug 23 11:14 PDT | 09 Aug 23 11:14 PDT |
	|         | image ls --format short                                  |                             |         |         |                     |                     |
	|         | --alsologtostderr                                        |                             |         |         |                     |                     |
	| ssh     | functional-901000 ssh pgrep                              | functional-901000           | jenkins | v1.31.1 | 09 Aug 23 11:14 PDT |                     |
	|         | buildkitd                                                |                             |         |         |                     |                     |
	| image   | functional-901000                                        | functional-901000           | jenkins | v1.31.1 | 09 Aug 23 11:14 PDT | 09 Aug 23 11:14 PDT |
	|         | image ls --format json                                   |                             |         |         |                     |                     |
	|         | --alsologtostderr                                        |                             |         |         |                     |                     |
	| image   | functional-901000 image build -t                         | functional-901000           | jenkins | v1.31.1 | 09 Aug 23 11:14 PDT | 09 Aug 23 11:14 PDT |
	|         | localhost/my-image:functional-901000                     |                             |         |         |                     |                     |
	|         | testdata/build --alsologtostderr                         |                             |         |         |                     |                     |
	| image   | functional-901000                                        | functional-901000           | jenkins | v1.31.1 | 09 Aug 23 11:14 PDT | 09 Aug 23 11:14 PDT |
	|         | image ls --format table                                  |                             |         |         |                     |                     |
	|         | --alsologtostderr                                        |                             |         |         |                     |                     |
	| image   | functional-901000 image ls                               | functional-901000           | jenkins | v1.31.1 | 09 Aug 23 11:14 PDT | 09 Aug 23 11:14 PDT |
	| delete  | -p functional-901000                                     | functional-901000           | jenkins | v1.31.1 | 09 Aug 23 11:14 PDT | 09 Aug 23 11:14 PDT |
	| start   | -p image-340000 --driver=qemu2                           | image-340000                | jenkins | v1.31.1 | 09 Aug 23 11:14 PDT | 09 Aug 23 11:15 PDT |
	|         |                                                          |                             |         |         |                     |                     |
	| image   | build -t aaa:latest                                      | image-340000                | jenkins | v1.31.1 | 09 Aug 23 11:15 PDT | 09 Aug 23 11:15 PDT |
	|         | ./testdata/image-build/test-normal                       |                             |         |         |                     |                     |
	|         | -p image-340000                                          |                             |         |         |                     |                     |
	| image   | build -t aaa:latest                                      | image-340000                | jenkins | v1.31.1 | 09 Aug 23 11:15 PDT | 09 Aug 23 11:15 PDT |
	|         | --build-opt=build-arg=ENV_A=test_env_str                 |                             |         |         |                     |                     |
	|         | --build-opt=no-cache                                     |                             |         |         |                     |                     |
	|         | ./testdata/image-build/test-arg -p                       |                             |         |         |                     |                     |
	|         | image-340000                                             |                             |         |         |                     |                     |
	| image   | build -t aaa:latest                                      | image-340000                | jenkins | v1.31.1 | 09 Aug 23 11:15 PDT | 09 Aug 23 11:15 PDT |
	|         | ./testdata/image-build/test-normal                       |                             |         |         |                     |                     |
	|         | --build-opt=no-cache -p                                  |                             |         |         |                     |                     |
	|         | image-340000                                             |                             |         |         |                     |                     |
	| image   | build -t aaa:latest                                      | image-340000                | jenkins | v1.31.1 | 09 Aug 23 11:15 PDT | 09 Aug 23 11:15 PDT |
	|         | -f inner/Dockerfile                                      |                             |         |         |                     |                     |
	|         | ./testdata/image-build/test-f                            |                             |         |         |                     |                     |
	|         | -p image-340000                                          |                             |         |         |                     |                     |
	| delete  | -p image-340000                                          | image-340000                | jenkins | v1.31.1 | 09 Aug 23 11:15 PDT | 09 Aug 23 11:15 PDT |
	| start   | -p ingress-addon-legacy-050000                           | ingress-addon-legacy-050000 | jenkins | v1.31.1 | 09 Aug 23 11:15 PDT | 09 Aug 23 11:16 PDT |
	|         | --kubernetes-version=v1.18.20                            |                             |         |         |                     |                     |
	|         | --memory=4096 --wait=true                                |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                   |                             |         |         |                     |                     |
	|         | --driver=qemu2                                           |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-050000                              | ingress-addon-legacy-050000 | jenkins | v1.31.1 | 09 Aug 23 11:16 PDT | 09 Aug 23 11:16 PDT |
	|         | addons enable ingress                                    |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                   |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-050000                              | ingress-addon-legacy-050000 | jenkins | v1.31.1 | 09 Aug 23 11:16 PDT | 09 Aug 23 11:16 PDT |
	|         | addons enable ingress-dns                                |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                   |                             |         |         |                     |                     |
	| ssh     | ingress-addon-legacy-050000                              | ingress-addon-legacy-050000 | jenkins | v1.31.1 | 09 Aug 23 11:17 PDT | 09 Aug 23 11:17 PDT |
	|         | ssh curl -s http://127.0.0.1/                            |                             |         |         |                     |                     |
	|         | -H 'Host: nginx.example.com'                             |                             |         |         |                     |                     |
	| ip      | ingress-addon-legacy-050000 ip                           | ingress-addon-legacy-050000 | jenkins | v1.31.1 | 09 Aug 23 11:17 PDT | 09 Aug 23 11:17 PDT |
	| addons  | ingress-addon-legacy-050000                              | ingress-addon-legacy-050000 | jenkins | v1.31.1 | 09 Aug 23 11:17 PDT | 09 Aug 23 11:17 PDT |
	|         | addons disable ingress-dns                               |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                   |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-050000                              | ingress-addon-legacy-050000 | jenkins | v1.31.1 | 09 Aug 23 11:17 PDT | 09 Aug 23 11:17 PDT |
	|         | addons disable ingress                                   |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                   |                             |         |         |                     |                     |
	|---------|----------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/09 11:15:20
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.6 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0809 11:15:20.587906    2243 out.go:296] Setting OutFile to fd 1 ...
	I0809 11:15:20.588007    2243 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:15:20.588010    2243 out.go:309] Setting ErrFile to fd 2...
	I0809 11:15:20.588012    2243 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:15:20.588130    2243 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17011-995/.minikube/bin
	I0809 11:15:20.589113    2243 out.go:303] Setting JSON to false
	I0809 11:15:20.604255    2243 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":894,"bootTime":1691604026,"procs":419,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0809 11:15:20.604322    2243 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0809 11:15:20.608070    2243 out.go:177] * [ingress-addon-legacy-050000] minikube v1.31.1 on Darwin 13.5 (arm64)
	I0809 11:15:20.616045    2243 out.go:177]   - MINIKUBE_LOCATION=17011
	I0809 11:15:20.616090    2243 notify.go:220] Checking for updates...
	I0809 11:15:20.620117    2243 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	I0809 11:15:20.621072    2243 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0809 11:15:20.624118    2243 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0809 11:15:20.627047    2243 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	I0809 11:15:20.628035    2243 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0809 11:15:20.631216    2243 driver.go:373] Setting default libvirt URI to qemu:///system
	I0809 11:15:20.635074    2243 out.go:177] * Using the qemu2 driver based on user configuration
	I0809 11:15:20.640052    2243 start.go:298] selected driver: qemu2
	I0809 11:15:20.640057    2243 start.go:901] validating driver "qemu2" against <nil>
	I0809 11:15:20.640063    2243 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0809 11:15:20.641943    2243 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0809 11:15:20.645009    2243 out.go:177] * Automatically selected the socket_vmnet network
	I0809 11:15:20.648215    2243 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0809 11:15:20.648252    2243 cni.go:84] Creating CNI manager for ""
	I0809 11:15:20.648258    2243 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0809 11:15:20.648264    2243 start_flags.go:319] config:
	{Name:ingress-addon-legacy-050000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-050000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loc
al ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP
: SSHAuthSock: SSHAgentPID:0}
	I0809 11:15:20.652346    2243 iso.go:125] acquiring lock: {Name:mkcf9da2bbc06f7ffafe691590a499fa3fc28d1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 11:15:20.659078    2243 out.go:177] * Starting control plane node ingress-addon-legacy-050000 in cluster ingress-addon-legacy-050000
	I0809 11:15:20.662954    2243 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0809 11:15:20.720883    2243 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I0809 11:15:20.720897    2243 cache.go:57] Caching tarball of preloaded images
	I0809 11:15:20.721084    2243 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0809 11:15:20.726127    2243 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0809 11:15:20.736079    2243 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0809 11:15:20.817748    2243 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4?checksum=md5:c8c260b886393123ce9d312d8ac2379e -> /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I0809 11:15:26.413858    2243 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0809 11:15:26.413990    2243 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0809 11:15:27.163047    2243 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0809 11:15:27.163228    2243 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/ingress-addon-legacy-050000/config.json ...
	I0809 11:15:27.163250    2243 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/ingress-addon-legacy-050000/config.json: {Name:mk928a96c7653235e4761757cfa478789f8f9764 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 11:15:27.163482    2243 start.go:365] acquiring machines lock for ingress-addon-legacy-050000: {Name:mk8e7245a2916cb0ab3d957fdcdeb632388214d9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0809 11:15:27.163507    2243 start.go:369] acquired machines lock for "ingress-addon-legacy-050000" in 19.291µs
	I0809 11:15:27.163515    2243 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-050000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 K
ubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-050000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:doc
ker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0809 11:15:27.163582    2243 start.go:125] createHost starting for "" (driver="qemu2")
	I0809 11:15:27.174540    2243 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0809 11:15:27.188970    2243 start.go:159] libmachine.API.Create for "ingress-addon-legacy-050000" (driver="qemu2")
	I0809 11:15:27.188999    2243 client.go:168] LocalClient.Create starting
	I0809 11:15:27.189070    2243 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem
	I0809 11:15:27.189101    2243 main.go:141] libmachine: Decoding PEM data...
	I0809 11:15:27.189113    2243 main.go:141] libmachine: Parsing certificate...
	I0809 11:15:27.189151    2243 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/cert.pem
	I0809 11:15:27.189168    2243 main.go:141] libmachine: Decoding PEM data...
	I0809 11:15:27.189175    2243 main.go:141] libmachine: Parsing certificate...
	I0809 11:15:27.189515    2243 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17011-995/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17011-995/.minikube/cache/iso/arm64/minikube-v1.31.0-1690838458-16971-arm64.iso...
	I0809 11:15:27.309223    2243 main.go:141] libmachine: Creating SSH key...
	I0809 11:15:27.406212    2243 main.go:141] libmachine: Creating Disk image...
	I0809 11:15:27.406218    2243 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0809 11:15:27.406350    2243 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17011-995/.minikube/machines/ingress-addon-legacy-050000/disk.qcow2.raw /Users/jenkins/minikube-integration/17011-995/.minikube/machines/ingress-addon-legacy-050000/disk.qcow2
	I0809 11:15:27.414811    2243 main.go:141] libmachine: STDOUT: 
	I0809 11:15:27.414827    2243 main.go:141] libmachine: STDERR: 
	I0809 11:15:27.414885    2243 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/ingress-addon-legacy-050000/disk.qcow2 +20000M
	I0809 11:15:27.422163    2243 main.go:141] libmachine: STDOUT: Image resized.
	
	I0809 11:15:27.422175    2243 main.go:141] libmachine: STDERR: 
	I0809 11:15:27.422196    2243 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17011-995/.minikube/machines/ingress-addon-legacy-050000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17011-995/.minikube/machines/ingress-addon-legacy-050000/disk.qcow2
	I0809 11:15:27.422204    2243 main.go:141] libmachine: Starting QEMU VM...
	I0809 11:15:27.422247    2243 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4096 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17011-995/.minikube/machines/ingress-addon-legacy-050000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/ingress-addon-legacy-050000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17011-995/.minikube/machines/ingress-addon-legacy-050000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:ad:62:e6:59:02 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/ingress-addon-legacy-050000/disk.qcow2
	I0809 11:15:27.456363    2243 main.go:141] libmachine: STDOUT: 
	I0809 11:15:27.456403    2243 main.go:141] libmachine: STDERR: 
	I0809 11:15:27.456408    2243 main.go:141] libmachine: Attempt 0
	I0809 11:15:27.456426    2243 main.go:141] libmachine: Searching for e2:ad:62:e6:59:2 in /var/db/dhcpd_leases ...
	I0809 11:15:27.456495    2243 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0809 11:15:27.456513    2243 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:d2:84:56:32:32:4d ID:1,d2:84:56:32:32:4d Lease:0x64d52923}
	I0809 11:15:27.456524    2243 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:d6:b2:36:74:10:3e ID:1,d6:b2:36:74:10:3e Lease:0x64d5283a}
	I0809 11:15:27.456529    2243 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:16:25:b8:60:50:4 ID:1,16:25:b8:60:50:4 Lease:0x64d3d6ad}
	I0809 11:15:27.456535    2243 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:7e:6b:2e:9d:a1:90 ID:1,7e:6b:2e:9d:a1:90 Lease:0x64d527ea}
	I0809 11:15:29.458616    2243 main.go:141] libmachine: Attempt 1
	I0809 11:15:29.458710    2243 main.go:141] libmachine: Searching for e2:ad:62:e6:59:2 in /var/db/dhcpd_leases ...
	I0809 11:15:29.459125    2243 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0809 11:15:29.459176    2243 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:d2:84:56:32:32:4d ID:1,d2:84:56:32:32:4d Lease:0x64d52923}
	I0809 11:15:29.459240    2243 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:d6:b2:36:74:10:3e ID:1,d6:b2:36:74:10:3e Lease:0x64d5283a}
	I0809 11:15:29.459273    2243 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:16:25:b8:60:50:4 ID:1,16:25:b8:60:50:4 Lease:0x64d3d6ad}
	I0809 11:15:29.459304    2243 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:7e:6b:2e:9d:a1:90 ID:1,7e:6b:2e:9d:a1:90 Lease:0x64d527ea}
	I0809 11:15:31.461391    2243 main.go:141] libmachine: Attempt 2
	I0809 11:15:31.461441    2243 main.go:141] libmachine: Searching for e2:ad:62:e6:59:2 in /var/db/dhcpd_leases ...
	I0809 11:15:31.461543    2243 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0809 11:15:31.461555    2243 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:d2:84:56:32:32:4d ID:1,d2:84:56:32:32:4d Lease:0x64d52923}
	I0809 11:15:31.461560    2243 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:d6:b2:36:74:10:3e ID:1,d6:b2:36:74:10:3e Lease:0x64d5283a}
	I0809 11:15:31.461565    2243 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:16:25:b8:60:50:4 ID:1,16:25:b8:60:50:4 Lease:0x64d3d6ad}
	I0809 11:15:31.461569    2243 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:7e:6b:2e:9d:a1:90 ID:1,7e:6b:2e:9d:a1:90 Lease:0x64d527ea}
	I0809 11:15:33.463549    2243 main.go:141] libmachine: Attempt 3
	I0809 11:15:33.463565    2243 main.go:141] libmachine: Searching for e2:ad:62:e6:59:2 in /var/db/dhcpd_leases ...
	I0809 11:15:33.463639    2243 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0809 11:15:33.463648    2243 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:d2:84:56:32:32:4d ID:1,d2:84:56:32:32:4d Lease:0x64d52923}
	I0809 11:15:33.463653    2243 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:d6:b2:36:74:10:3e ID:1,d6:b2:36:74:10:3e Lease:0x64d5283a}
	I0809 11:15:33.463659    2243 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:16:25:b8:60:50:4 ID:1,16:25:b8:60:50:4 Lease:0x64d3d6ad}
	I0809 11:15:33.463663    2243 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:7e:6b:2e:9d:a1:90 ID:1,7e:6b:2e:9d:a1:90 Lease:0x64d527ea}
	I0809 11:15:35.465616    2243 main.go:141] libmachine: Attempt 4
	I0809 11:15:35.465626    2243 main.go:141] libmachine: Searching for e2:ad:62:e6:59:2 in /var/db/dhcpd_leases ...
	I0809 11:15:35.465663    2243 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0809 11:15:35.465670    2243 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:d2:84:56:32:32:4d ID:1,d2:84:56:32:32:4d Lease:0x64d52923}
	I0809 11:15:35.465677    2243 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:d6:b2:36:74:10:3e ID:1,d6:b2:36:74:10:3e Lease:0x64d5283a}
	I0809 11:15:35.465683    2243 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:16:25:b8:60:50:4 ID:1,16:25:b8:60:50:4 Lease:0x64d3d6ad}
	I0809 11:15:35.465689    2243 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:7e:6b:2e:9d:a1:90 ID:1,7e:6b:2e:9d:a1:90 Lease:0x64d527ea}
	I0809 11:15:37.466300    2243 main.go:141] libmachine: Attempt 5
	I0809 11:15:37.466318    2243 main.go:141] libmachine: Searching for e2:ad:62:e6:59:2 in /var/db/dhcpd_leases ...
	I0809 11:15:37.466387    2243 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0809 11:15:37.466396    2243 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:d2:84:56:32:32:4d ID:1,d2:84:56:32:32:4d Lease:0x64d52923}
	I0809 11:15:37.466401    2243 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:d6:b2:36:74:10:3e ID:1,d6:b2:36:74:10:3e Lease:0x64d5283a}
	I0809 11:15:37.466406    2243 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:16:25:b8:60:50:4 ID:1,16:25:b8:60:50:4 Lease:0x64d3d6ad}
	I0809 11:15:37.466412    2243 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:7e:6b:2e:9d:a1:90 ID:1,7e:6b:2e:9d:a1:90 Lease:0x64d527ea}
	I0809 11:15:39.468432    2243 main.go:141] libmachine: Attempt 6
	I0809 11:15:39.468494    2243 main.go:141] libmachine: Searching for e2:ad:62:e6:59:2 in /var/db/dhcpd_leases ...
	I0809 11:15:39.468638    2243 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0809 11:15:39.468657    2243 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:e2:ad:62:e6:59:2 ID:1,e2:ad:62:e6:59:2 Lease:0x64d5294a}
	I0809 11:15:39.468666    2243 main.go:141] libmachine: Found match: e2:ad:62:e6:59:2
	I0809 11:15:39.468679    2243 main.go:141] libmachine: IP: 192.168.105.6
	I0809 11:15:39.468689    2243 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.6)...
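Note that the loop above searches for `e2:ad:62:e6:59:2`, not the `e2:ad:62:e6:59:02` handed to QEMU: macOS's `/var/db/dhcpd_leases` stores hardware addresses with leading zeros stripped per octet, so the MAC must be normalized before matching. A minimal sketch of that normalization and lease lookup (the helper names are illustrative, and the sample lease text is trimmed from the entries in this log):

```python
def trim_mac(mac):
    """Strip leading zeros from each octet, as /var/db/dhcpd_leases does."""
    return ":".join(part.lstrip("0") or "0" for part in mac.split(":"))

def find_lease_ip(leases_text, mac):
    """Return the IP of the lease entry whose hw_address matches mac, else None."""
    wanted = trim_mac(mac)
    ip = None
    for line in leases_text.splitlines():
        line = line.strip()
        if line.startswith("ip_address="):
            ip = line.split("=", 1)[1]          # remember most recent IP seen
        elif line.startswith("hw_address=") and line.split(",", 1)[-1] == wanted:
            return ip                            # hw_address=1,<mac> follows its ip
    return None

sample = """{
ip_address=192.168.105.6
hw_address=1,e2:ad:62:e6:59:2
}"""
print(find_lease_ip(sample, "e2:ad:62:e6:59:02"))  # -> 192.168.105.6
```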
	I0809 11:15:41.491200    2243 machine.go:88] provisioning docker machine ...
	I0809 11:15:41.491279    2243 buildroot.go:166] provisioning hostname "ingress-addon-legacy-050000"
	I0809 11:15:41.491502    2243 main.go:141] libmachine: Using SSH client type: native
	I0809 11:15:41.492400    2243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105281590] 0x105283ff0 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0809 11:15:41.492431    2243 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-050000 && echo "ingress-addon-legacy-050000" | sudo tee /etc/hostname
	I0809 11:15:41.593979    2243 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-050000
	
	I0809 11:15:41.594105    2243 main.go:141] libmachine: Using SSH client type: native
	I0809 11:15:41.594579    2243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105281590] 0x105283ff0 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0809 11:15:41.594595    2243 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-050000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-050000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-050000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0809 11:15:41.677458    2243 main.go:141] libmachine: SSH cmd err, output: <nil>: 
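The shell snippet above keeps `/etc/hosts` consistent with the new hostname: if the name is absent, it rewrites an existing `127.0.1.1` entry, otherwise appends one. A rough Python rendering of that logic for readability (the `ensure_hostname` helper is hypothetical, operating on the file contents as a string):

```python
def ensure_hostname(hosts, name):
    """Mirror the shell logic above: guarantee `name` appears in /etc/hosts text."""
    lines = hosts.splitlines()
    if any(name in line.split() for line in lines):
        return hosts                       # already present, leave the file alone
    for i, line in enumerate(lines):
        if line.startswith("127.0.1.1"):
            lines[i] = f"127.0.1.1 {name}" # like: sed 's/^127.0.1.1\s.*/.../'
            break
    else:
        lines.append(f"127.0.1.1 {name}")  # like: tee -a /etc/hosts
    return "\n".join(lines) + "\n"

hosts = "127.0.0.1 localhost\n127.0.1.1 buildroot\n"
print(ensure_hostname(hosts, "ingress-addon-legacy-050000"))
```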
	I0809 11:15:41.677478    2243 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17011-995/.minikube CaCertPath:/Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17011-995/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17011-995/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17011-995/.minikube}
	I0809 11:15:41.677488    2243 buildroot.go:174] setting up certificates
	I0809 11:15:41.677496    2243 provision.go:83] configureAuth start
	I0809 11:15:41.677506    2243 provision.go:138] copyHostCerts
	I0809 11:15:41.677559    2243 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17011-995/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/17011-995/.minikube/key.pem
	I0809 11:15:41.677636    2243 exec_runner.go:144] found /Users/jenkins/minikube-integration/17011-995/.minikube/key.pem, removing ...
	I0809 11:15:41.677644    2243 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17011-995/.minikube/key.pem
	I0809 11:15:41.677855    2243 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17011-995/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17011-995/.minikube/key.pem (1679 bytes)
	I0809 11:15:41.678081    2243 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/17011-995/.minikube/ca.pem
	I0809 11:15:41.678105    2243 exec_runner.go:144] found /Users/jenkins/minikube-integration/17011-995/.minikube/ca.pem, removing ...
	I0809 11:15:41.678108    2243 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17011-995/.minikube/ca.pem
	I0809 11:15:41.678172    2243 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17011-995/.minikube/ca.pem (1082 bytes)
	I0809 11:15:41.678269    2243 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17011-995/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/17011-995/.minikube/cert.pem
	I0809 11:15:41.678292    2243 exec_runner.go:144] found /Users/jenkins/minikube-integration/17011-995/.minikube/cert.pem, removing ...
	I0809 11:15:41.678296    2243 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17011-995/.minikube/cert.pem
	I0809 11:15:41.678363    2243 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17011-995/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17011-995/.minikube/cert.pem (1123 bytes)
	I0809 11:15:41.678462    2243 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17011-995/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-050000 san=[192.168.105.6 192.168.105.6 localhost 127.0.0.1 minikube ingress-addon-legacy-050000]
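The server certificate above is generated with a SAN list covering the VM IP, localhost, and both machine names; note `192.168.105.6` appears twice because it is contributed both as the machine IP and as an extra SAN. A purely illustrative order-preserving de-duplication of such a list (not what minikube itself does, which passes the list through as-is):

```python
def dedupe_sans(sans):
    """Drop duplicate SAN entries while preserving first-seen order."""
    seen = set()
    return [s for s in sans if not (s in seen or seen.add(s))]

sans = ["192.168.105.6", "192.168.105.6", "localhost", "127.0.0.1",
        "minikube", "ingress-addon-legacy-050000"]
print(dedupe_sans(sans))  # first copy of the IP kept, second dropped
```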
	I0809 11:15:41.788815    2243 provision.go:172] copyRemoteCerts
	I0809 11:15:41.788859    2243 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0809 11:15:41.788870    2243 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/ingress-addon-legacy-050000/id_rsa Username:docker}
	I0809 11:15:41.827479    2243 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0809 11:15:41.827533    2243 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0809 11:15:41.834971    2243 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17011-995/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0809 11:15:41.835014    2243 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17011-995/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0809 11:15:41.842170    2243 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17011-995/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0809 11:15:41.842224    2243 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17011-995/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0809 11:15:41.849136    2243 provision.go:86] duration metric: configureAuth took 171.636584ms
	I0809 11:15:41.849143    2243 buildroot.go:189] setting minikube options for container-runtime
	I0809 11:15:41.849242    2243 config.go:182] Loaded profile config "ingress-addon-legacy-050000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0809 11:15:41.849278    2243 main.go:141] libmachine: Using SSH client type: native
	I0809 11:15:41.849501    2243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105281590] 0x105283ff0 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0809 11:15:41.849509    2243 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0809 11:15:41.921961    2243 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0809 11:15:41.921968    2243 buildroot.go:70] root file system type: tmpfs
	I0809 11:15:41.922032    2243 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0809 11:15:41.922089    2243 main.go:141] libmachine: Using SSH client type: native
	I0809 11:15:41.922349    2243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105281590] 0x105283ff0 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0809 11:15:41.922389    2243 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0809 11:15:41.998199    2243 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0809 11:15:41.998243    2243 main.go:141] libmachine: Using SSH client type: native
	I0809 11:15:41.998511    2243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105281590] 0x105283ff0 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0809 11:15:41.998522    2243 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0809 11:15:42.368742    2243 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0809 11:15:42.368755    2243 machine.go:91] provisioned docker machine in 877.550334ms
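The `diff -u ... || { mv ...; systemctl ... }` command above swaps in the rendered unit file, and restarts Docker, only when the content actually differs from what is installed (here `diff` failed because no unit existed yet, so the new file was moved into place). The same install-if-changed idea in a small sketch (`install_if_changed` and the atomic-replace detail are illustrative assumptions, not minikube's implementation):

```python
import os
import tempfile

def install_if_changed(path, new_content):
    """Replace `path` with new_content only if it differs; return True if replaced."""
    try:
        with open(path) as f:
            if f.read() == new_content:
                return False               # up to date: skip the service restart
    except FileNotFoundError:
        pass                               # first install, the case hit in the log
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w") as f:
        f.write(new_content)
    os.replace(tmp, path)                  # atomic swap, like the `mv` in the log
    return True
```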
	I0809 11:15:42.368760    2243 client.go:171] LocalClient.Create took 15.180275208s
	I0809 11:15:42.368776    2243 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-050000" took 15.18032425s
	I0809 11:15:42.368786    2243 start.go:300] post-start starting for "ingress-addon-legacy-050000" (driver="qemu2")
	I0809 11:15:42.368791    2243 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0809 11:15:42.368869    2243 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0809 11:15:42.368879    2243 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/ingress-addon-legacy-050000/id_rsa Username:docker}
	I0809 11:15:42.410588    2243 ssh_runner.go:195] Run: cat /etc/os-release
	I0809 11:15:42.412259    2243 info.go:137] Remote host: Buildroot 2021.02.12
	I0809 11:15:42.412269    2243 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17011-995/.minikube/addons for local assets ...
	I0809 11:15:42.412345    2243 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17011-995/.minikube/files for local assets ...
	I0809 11:15:42.412450    2243 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17011-995/.minikube/files/etc/ssl/certs/14102.pem -> 14102.pem in /etc/ssl/certs
	I0809 11:15:42.412456    2243 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17011-995/.minikube/files/etc/ssl/certs/14102.pem -> /etc/ssl/certs/14102.pem
	I0809 11:15:42.412575    2243 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0809 11:15:42.415488    2243 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17011-995/.minikube/files/etc/ssl/certs/14102.pem --> /etc/ssl/certs/14102.pem (1708 bytes)
	I0809 11:15:42.423046    2243 start.go:303] post-start completed in 54.256917ms
	I0809 11:15:42.423487    2243 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/ingress-addon-legacy-050000/config.json ...
	I0809 11:15:42.423645    2243 start.go:128] duration metric: createHost completed in 15.260578917s
	I0809 11:15:42.423675    2243 main.go:141] libmachine: Using SSH client type: native
	I0809 11:15:42.423909    2243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105281590] 0x105283ff0 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0809 11:15:42.423913    2243 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0809 11:15:42.494606    2243 main.go:141] libmachine: SSH cmd err, output: <nil>: 1691604942.632691752
	
	I0809 11:15:42.494618    2243 fix.go:206] guest clock: 1691604942.632691752
	I0809 11:15:42.494623    2243 fix.go:219] Guest: 2023-08-09 11:15:42.632691752 -0700 PDT Remote: 2023-08-09 11:15:42.423648 -0700 PDT m=+21.856629417 (delta=209.043752ms)
	I0809 11:15:42.494634    2243 fix.go:190] guest clock delta is within tolerance: 209.043752ms
	I0809 11:15:42.494638    2243 start.go:83] releasing machines lock for "ingress-addon-legacy-050000", held for 15.331648916s
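The fix step above reads the guest's clock (`date +%s.%N`), subtracts the host timestamp, and accepts the machine when the drift (209ms here) is within tolerance. A toy version of that comparison, using the two timestamps from the log; the 2-second tolerance is an assumed placeholder, not minikube's actual constant:

```python
def clock_within_tolerance(guest_epoch, host_epoch, tolerance_s=2.0):
    """Return True if the absolute guest/host clock delta is acceptable."""
    return abs(guest_epoch - host_epoch) <= tolerance_s

# Values taken from the "guest clock" and "Remote" lines above.
delta = 1691604942.632691752 - 1691604942.423648
print(clock_within_tolerance(1691604942.632691752, 1691604942.423648))  # ~0.209s drift
```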
	I0809 11:15:42.494954    2243 ssh_runner.go:195] Run: cat /version.json
	I0809 11:15:42.494957    2243 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0809 11:15:42.494963    2243 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/ingress-addon-legacy-050000/id_rsa Username:docker}
	I0809 11:15:42.494974    2243 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/ingress-addon-legacy-050000/id_rsa Username:docker}
	I0809 11:15:42.534565    2243 ssh_runner.go:195] Run: systemctl --version
	I0809 11:15:42.575376    2243 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0809 11:15:42.577665    2243 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0809 11:15:42.577702    2243 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0809 11:15:42.581386    2243 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0809 11:15:42.587046    2243 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0809 11:15:42.587054    2243 start.go:466] detecting cgroup driver to use...
	I0809 11:15:42.587129    2243 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0809 11:15:42.594991    2243 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0809 11:15:42.598269    2243 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0809 11:15:42.601210    2243 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0809 11:15:42.601244    2243 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0809 11:15:42.604118    2243 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0809 11:15:42.607439    2243 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0809 11:15:42.610751    2243 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0809 11:15:42.613618    2243 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0809 11:15:42.616309    2243 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0809 11:15:42.619902    2243 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0809 11:15:42.623363    2243 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0809 11:15:42.626435    2243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0809 11:15:42.690220    2243 ssh_runner.go:195] Run: sudo systemctl restart containerd
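The `sed` rewrites above switch containerd to the cgroupfs driver before the restart; the core `SystemdCgroup` substitution can be mimicked with a multiline regex like this (the config snippet is a minimal made-up example, not the VM's real `config.toml`):

```python
import re

def set_cgroupfs(toml_text):
    """Equivalent of: sed -r 's|^( *)SystemdCgroup = .*$|\\1SystemdCgroup = false|g'"""
    return re.sub(r"^( *)SystemdCgroup = .*$", r"\1SystemdCgroup = false",
                  toml_text, flags=re.MULTILINE)

cfg = "[plugins.cri]\n  SystemdCgroup = true\n"
print(set_cgroupfs(cfg))  # indentation preserved, value forced to false
```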
	I0809 11:15:42.696637    2243 start.go:466] detecting cgroup driver to use...
	I0809 11:15:42.696724    2243 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0809 11:15:42.703013    2243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0809 11:15:42.708884    2243 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0809 11:15:42.719743    2243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0809 11:15:42.725534    2243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0809 11:15:42.730454    2243 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0809 11:15:42.767293    2243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0809 11:15:42.773440    2243 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0809 11:15:42.779718    2243 ssh_runner.go:195] Run: which cri-dockerd
	I0809 11:15:42.781398    2243 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0809 11:15:42.784276    2243 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0809 11:15:42.790351    2243 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0809 11:15:42.844314    2243 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0809 11:15:42.924666    2243 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0809 11:15:42.924684    2243 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0809 11:15:42.929883    2243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0809 11:15:43.009755    2243 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0809 11:15:44.167379    2243 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.157648125s)
	I0809 11:15:44.167446    2243 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0809 11:15:44.177098    2243 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0809 11:15:44.193508    2243 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 24.0.4 ...
	I0809 11:15:44.193640    2243 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0809 11:15:44.194914    2243 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0809 11:15:44.198957    2243 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0809 11:15:44.199011    2243 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0809 11:15:44.211109    2243 docker.go:636] Got preloaded images: 
	I0809 11:15:44.211118    2243 docker.go:642] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0809 11:15:44.211161    2243 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0809 11:15:44.214308    2243 ssh_runner.go:195] Run: which lz4
	I0809 11:15:44.215446    2243 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0809 11:15:44.215540    2243 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0809 11:15:44.216818    2243 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0809 11:15:44.216829    2243 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (459739018 bytes)
	I0809 11:15:45.933013    2243 docker.go:600] Took 1.717579 seconds to copy over tarball
	I0809 11:15:45.933079    2243 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0809 11:15:47.219344    2243 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.286295958s)
	I0809 11:15:47.219358    2243 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0809 11:15:47.244133    2243 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0809 11:15:47.249362    2243 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I0809 11:15:47.255104    2243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0809 11:15:47.339122    2243 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0809 11:15:48.866777    2243 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.527691s)
	I0809 11:15:48.866884    2243 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0809 11:15:48.872721    2243 docker.go:636] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0809 11:15:48.872737    2243 docker.go:642] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0809 11:15:48.872741    2243 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0809 11:15:48.888046    2243 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0809 11:15:48.888558    2243 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0809 11:15:48.888599    2243 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0809 11:15:48.888619    2243 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0809 11:15:48.892095    2243 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0809 11:15:48.894622    2243 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0809 11:15:48.896021    2243 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0809 11:15:48.896058    2243 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0809 11:15:48.900340    2243 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0809 11:15:48.900445    2243 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0809 11:15:48.900550    2243 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0809 11:15:48.900552    2243 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0809 11:15:48.902943    2243 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0809 11:15:48.903458    2243 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0809 11:15:48.903489    2243 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0809 11:15:48.903745    2243 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0809 11:15:49.466382    2243 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0809 11:15:49.472789    2243 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I0809 11:15:49.472820    2243 docker.go:316] Removing image: registry.k8s.io/pause:3.2
	I0809 11:15:49.472866    2243 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I0809 11:15:49.478696    2243 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	W0809 11:15:49.752417    2243 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0809 11:15:49.752545    2243 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0809 11:15:49.758806    2243 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I0809 11:15:49.758834    2243 docker.go:316] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0809 11:15:49.758886    2243 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0809 11:15:49.764575    2243 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	W0809 11:15:49.899919    2243 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0809 11:15:49.900054    2243 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0809 11:15:49.905841    2243 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I0809 11:15:49.905863    2243 docker.go:316] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0809 11:15:49.905902    2243 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	I0809 11:15:49.911846    2243 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	W0809 11:15:50.008097    2243 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0809 11:15:50.008191    2243 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0809 11:15:50.014860    2243 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0809 11:15:50.014885    2243 docker.go:316] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0809 11:15:50.014926    2243 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0809 11:15:50.035340    2243 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	W0809 11:15:50.137884    2243 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0809 11:15:50.137978    2243 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0809 11:15:50.144350    2243 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I0809 11:15:50.144375    2243 docker.go:316] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0809 11:15:50.144428    2243 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0809 11:15:50.150425    2243 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	W0809 11:15:50.320259    2243 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0809 11:15:50.320375    2243 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0809 11:15:50.326960    2243 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I0809 11:15:50.326984    2243 docker.go:316] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0809 11:15:50.327032    2243 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0809 11:15:50.339126    2243 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	W0809 11:15:50.550555    2243 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I0809 11:15:50.550701    2243 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0809 11:15:50.557496    2243 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I0809 11:15:50.557526    2243 docker.go:316] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0809 11:15:50.557582    2243 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	I0809 11:15:50.562667    2243 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	W0809 11:15:50.762718    2243 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I0809 11:15:50.763329    2243 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0809 11:15:50.784429    2243 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I0809 11:15:50.784486    2243 docker.go:316] Removing image: registry.k8s.io/coredns:1.6.7
	I0809 11:15:50.784585    2243 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	I0809 11:15:50.798160    2243 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I0809 11:15:50.798234    2243 cache_images.go:92] LoadImages completed in 1.925550959s
	W0809 11:15:50.798316    2243 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2: no such file or directory
	I0809 11:15:50.798422    2243 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0809 11:15:50.813909    2243 cni.go:84] Creating CNI manager for ""
	I0809 11:15:50.813929    2243 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0809 11:15:50.813957    2243 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0809 11:15:50.813980    2243 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.6 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-050000 NodeName:ingress-addon-legacy-050000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0809 11:15:50.814137    2243 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.6
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-050000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.6
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.6"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0809 11:15:50.814212    2243 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-050000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-050000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0809 11:15:50.814301    2243 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0809 11:15:50.820028    2243 binaries.go:44] Found k8s binaries, skipping transfer
	I0809 11:15:50.820093    2243 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0809 11:15:50.824465    2243 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (355 bytes)
	I0809 11:15:50.831629    2243 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0809 11:15:50.838367    2243 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2127 bytes)
	I0809 11:15:50.844292    2243 ssh_runner.go:195] Run: grep 192.168.105.6	control-plane.minikube.internal$ /etc/hosts
	I0809 11:15:50.845736    2243 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.6	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0809 11:15:50.849513    2243 certs.go:56] Setting up /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/ingress-addon-legacy-050000 for IP: 192.168.105.6
	I0809 11:15:50.849522    2243 certs.go:190] acquiring lock for shared ca certs: {Name:mkc408918270161d0a558be6b69aedd9ebd20eac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 11:15:50.849668    2243 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17011-995/.minikube/ca.key
	I0809 11:15:50.849710    2243 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17011-995/.minikube/proxy-client-ca.key
	I0809 11:15:50.849736    2243 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/ingress-addon-legacy-050000/client.key
	I0809 11:15:50.849744    2243 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/ingress-addon-legacy-050000/client.crt with IP's: []
	I0809 11:15:50.940739    2243 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/ingress-addon-legacy-050000/client.crt ...
	I0809 11:15:50.940743    2243 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/ingress-addon-legacy-050000/client.crt: {Name:mk314e0dfe05ce835712a5d17b6a2cec94d11e94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 11:15:50.940976    2243 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/ingress-addon-legacy-050000/client.key ...
	I0809 11:15:50.940984    2243 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/ingress-addon-legacy-050000/client.key: {Name:mkd706a70b166fa90217b43aceb7f26b0aa53598 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 11:15:50.941104    2243 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/ingress-addon-legacy-050000/apiserver.key.b354f644
	I0809 11:15:50.941113    2243 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/ingress-addon-legacy-050000/apiserver.crt.b354f644 with IP's: [192.168.105.6 10.96.0.1 127.0.0.1 10.0.0.1]
	I0809 11:15:51.088779    2243 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/ingress-addon-legacy-050000/apiserver.crt.b354f644 ...
	I0809 11:15:51.088783    2243 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/ingress-addon-legacy-050000/apiserver.crt.b354f644: {Name:mkdcb92c7af6d1da98f7cff21fdc1597562d074e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 11:15:51.088952    2243 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/ingress-addon-legacy-050000/apiserver.key.b354f644 ...
	I0809 11:15:51.088954    2243 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/ingress-addon-legacy-050000/apiserver.key.b354f644: {Name:mk3f460947326c087e6cde9dff01c09e9e466902 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 11:15:51.089055    2243 certs.go:337] copying /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/ingress-addon-legacy-050000/apiserver.crt.b354f644 -> /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/ingress-addon-legacy-050000/apiserver.crt
	I0809 11:15:51.089360    2243 certs.go:341] copying /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/ingress-addon-legacy-050000/apiserver.key.b354f644 -> /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/ingress-addon-legacy-050000/apiserver.key
	I0809 11:15:51.089458    2243 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/ingress-addon-legacy-050000/proxy-client.key
	I0809 11:15:51.089465    2243 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/ingress-addon-legacy-050000/proxy-client.crt with IP's: []
	I0809 11:15:51.123239    2243 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/ingress-addon-legacy-050000/proxy-client.crt ...
	I0809 11:15:51.123242    2243 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/ingress-addon-legacy-050000/proxy-client.crt: {Name:mkb02d38ad7e83a057b3a28b879975aecd3d76e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 11:15:51.123388    2243 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/ingress-addon-legacy-050000/proxy-client.key ...
	I0809 11:15:51.123391    2243 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/ingress-addon-legacy-050000/proxy-client.key: {Name:mk30f9f7a87d4e421eaa3cdbb03da18208384c8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 11:15:51.123507    2243 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/ingress-addon-legacy-050000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0809 11:15:51.123525    2243 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/ingress-addon-legacy-050000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0809 11:15:51.123542    2243 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/ingress-addon-legacy-050000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0809 11:15:51.123554    2243 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/ingress-addon-legacy-050000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0809 11:15:51.123566    2243 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17011-995/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0809 11:15:51.123578    2243 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17011-995/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0809 11:15:51.123588    2243 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17011-995/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0809 11:15:51.123599    2243 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17011-995/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0809 11:15:51.123689    2243 certs.go:437] found cert: /Users/jenkins/minikube-integration/17011-995/.minikube/certs/Users/jenkins/minikube-integration/17011-995/.minikube/certs/1410.pem (1338 bytes)
	W0809 11:15:51.123719    2243 certs.go:433] ignoring /Users/jenkins/minikube-integration/17011-995/.minikube/certs/Users/jenkins/minikube-integration/17011-995/.minikube/certs/1410_empty.pem, impossibly tiny 0 bytes
	I0809 11:15:51.123728    2243 certs.go:437] found cert: /Users/jenkins/minikube-integration/17011-995/.minikube/certs/Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca-key.pem (1679 bytes)
	I0809 11:15:51.123754    2243 certs.go:437] found cert: /Users/jenkins/minikube-integration/17011-995/.minikube/certs/Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem (1082 bytes)
	I0809 11:15:51.123780    2243 certs.go:437] found cert: /Users/jenkins/minikube-integration/17011-995/.minikube/certs/Users/jenkins/minikube-integration/17011-995/.minikube/certs/cert.pem (1123 bytes)
	I0809 11:15:51.123808    2243 certs.go:437] found cert: /Users/jenkins/minikube-integration/17011-995/.minikube/certs/Users/jenkins/minikube-integration/17011-995/.minikube/certs/key.pem (1679 bytes)
	I0809 11:15:51.123861    2243 certs.go:437] found cert: /Users/jenkins/minikube-integration/17011-995/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17011-995/.minikube/files/etc/ssl/certs/14102.pem (1708 bytes)
	I0809 11:15:51.123889    2243 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17011-995/.minikube/files/etc/ssl/certs/14102.pem -> /usr/share/ca-certificates/14102.pem
	I0809 11:15:51.123900    2243 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17011-995/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0809 11:15:51.123910    2243 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17011-995/.minikube/certs/1410.pem -> /usr/share/ca-certificates/1410.pem
	I0809 11:15:51.124274    2243 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/ingress-addon-legacy-050000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0809 11:15:51.131639    2243 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/ingress-addon-legacy-050000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0809 11:15:51.138756    2243 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/ingress-addon-legacy-050000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0809 11:15:51.145863    2243 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/ingress-addon-legacy-050000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0809 11:15:51.152750    2243 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17011-995/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0809 11:15:51.159751    2243 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17011-995/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0809 11:15:51.167227    2243 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17011-995/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0809 11:15:51.174617    2243 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17011-995/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0809 11:15:51.181855    2243 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17011-995/.minikube/files/etc/ssl/certs/14102.pem --> /usr/share/ca-certificates/14102.pem (1708 bytes)
	I0809 11:15:51.188586    2243 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17011-995/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0809 11:15:51.195469    2243 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17011-995/.minikube/certs/1410.pem --> /usr/share/ca-certificates/1410.pem (1338 bytes)
	I0809 11:15:51.202764    2243 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0809 11:15:51.208216    2243 ssh_runner.go:195] Run: openssl version
	I0809 11:15:51.210217    2243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0809 11:15:51.213327    2243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0809 11:15:51.214719    2243 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug  9 18:09 /usr/share/ca-certificates/minikubeCA.pem
	I0809 11:15:51.214748    2243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0809 11:15:51.216712    2243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0809 11:15:51.219840    2243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1410.pem && ln -fs /usr/share/ca-certificates/1410.pem /etc/ssl/certs/1410.pem"
	I0809 11:15:51.223094    2243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1410.pem
	I0809 11:15:51.224605    2243 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug  9 18:10 /usr/share/ca-certificates/1410.pem
	I0809 11:15:51.224623    2243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1410.pem
	I0809 11:15:51.226415    2243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1410.pem /etc/ssl/certs/51391683.0"
	I0809 11:15:51.229199    2243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14102.pem && ln -fs /usr/share/ca-certificates/14102.pem /etc/ssl/certs/14102.pem"
	I0809 11:15:51.232166    2243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14102.pem
	I0809 11:15:51.233817    2243 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug  9 18:10 /usr/share/ca-certificates/14102.pem
	I0809 11:15:51.233840    2243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14102.pem
	I0809 11:15:51.235585    2243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14102.pem /etc/ssl/certs/3ec20f2e.0"
	I0809 11:15:51.238880    2243 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0809 11:15:51.240248    2243 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0809 11:15:51.240275    2243 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-050000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-050000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.6 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0809 11:15:51.240347    2243 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0809 11:15:51.245842    2243 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0809 11:15:51.248609    2243 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0809 11:15:51.251463    2243 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0809 11:15:51.254586    2243 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0809 11:15:51.254601    2243 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0809 11:15:51.279868    2243 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0809 11:15:51.279895    2243 kubeadm.go:322] [preflight] Running pre-flight checks
	I0809 11:15:51.365270    2243 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0809 11:15:51.365356    2243 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0809 11:15:51.365404    2243 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0809 11:15:51.416995    2243 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0809 11:15:51.417321    2243 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0809 11:15:51.417361    2243 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0809 11:15:51.485077    2243 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0809 11:15:51.495293    2243 out.go:204]   - Generating certificates and keys ...
	I0809 11:15:51.495330    2243 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0809 11:15:51.495374    2243 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0809 11:15:51.522499    2243 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0809 11:15:51.570907    2243 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0809 11:15:51.671860    2243 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0809 11:15:51.740267    2243 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0809 11:15:51.905992    2243 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0809 11:15:51.906105    2243 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-050000 localhost] and IPs [192.168.105.6 127.0.0.1 ::1]
	I0809 11:15:51.995897    2243 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0809 11:15:51.995984    2243 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-050000 localhost] and IPs [192.168.105.6 127.0.0.1 ::1]
	I0809 11:15:52.195411    2243 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0809 11:15:52.240545    2243 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0809 11:15:52.372566    2243 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0809 11:15:52.372680    2243 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0809 11:15:52.612964    2243 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0809 11:15:52.673104    2243 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0809 11:15:52.727863    2243 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0809 11:15:52.782734    2243 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0809 11:15:52.783025    2243 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0809 11:15:52.789161    2243 out.go:204]   - Booting up control plane ...
	I0809 11:15:52.789207    2243 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0809 11:15:52.789855    2243 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0809 11:15:52.789906    2243 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0809 11:15:52.790820    2243 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0809 11:15:52.794190    2243 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0809 11:16:04.299870    2243 kubeadm.go:322] [apiclient] All control plane components are healthy after 11.505615 seconds
	I0809 11:16:04.300118    2243 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0809 11:16:04.324415    2243 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0809 11:16:04.838520    2243 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0809 11:16:04.838611    2243 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-050000 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0809 11:16:05.349555    2243 kubeadm.go:322] [bootstrap-token] Using token: gwxkb4.uy29w598glo2pn8t
	I0809 11:16:05.353817    2243 out.go:204]   - Configuring RBAC rules ...
	I0809 11:16:05.353954    2243 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0809 11:16:05.355024    2243 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0809 11:16:05.360293    2243 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0809 11:16:05.363508    2243 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0809 11:16:05.366935    2243 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0809 11:16:05.372157    2243 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0809 11:16:05.380150    2243 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0809 11:16:05.563003    2243 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0809 11:16:05.757961    2243 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0809 11:16:05.758458    2243 kubeadm.go:322] 
	I0809 11:16:05.758505    2243 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0809 11:16:05.758508    2243 kubeadm.go:322] 
	I0809 11:16:05.758585    2243 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0809 11:16:05.758591    2243 kubeadm.go:322] 
	I0809 11:16:05.758606    2243 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0809 11:16:05.758649    2243 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0809 11:16:05.758682    2243 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0809 11:16:05.758684    2243 kubeadm.go:322] 
	I0809 11:16:05.758714    2243 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0809 11:16:05.758774    2243 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0809 11:16:05.758814    2243 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0809 11:16:05.758821    2243 kubeadm.go:322] 
	I0809 11:16:05.758890    2243 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0809 11:16:05.758955    2243 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0809 11:16:05.758959    2243 kubeadm.go:322] 
	I0809 11:16:05.759021    2243 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token gwxkb4.uy29w598glo2pn8t \
	I0809 11:16:05.759084    2243 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:c906fcf1732ee135ed5d8c53a2456ece48422acee8957afd996ec13f4bd01100 \
	I0809 11:16:05.759102    2243 kubeadm.go:322]     --control-plane 
	I0809 11:16:05.759105    2243 kubeadm.go:322] 
	I0809 11:16:05.759155    2243 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0809 11:16:05.759160    2243 kubeadm.go:322] 
	I0809 11:16:05.759207    2243 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token gwxkb4.uy29w598glo2pn8t \
	I0809 11:16:05.759276    2243 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:c906fcf1732ee135ed5d8c53a2456ece48422acee8957afd996ec13f4bd01100 
	I0809 11:16:05.759440    2243 kubeadm.go:322] W0809 18:15:51.418160    1404 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0809 11:16:05.759570    2243 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0809 11:16:05.759655    2243 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.4. Latest validated version: 19.03
	I0809 11:16:05.759729    2243 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0809 11:16:05.759817    2243 kubeadm.go:322] W0809 18:15:52.924580    1404 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0809 11:16:05.759912    2243 kubeadm.go:322] W0809 18:15:52.925486    1404 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0809 11:16:05.759920    2243 cni.go:84] Creating CNI manager for ""
	I0809 11:16:05.759929    2243 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0809 11:16:05.759947    2243 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0809 11:16:05.760028    2243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 11:16:05.760038    2243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.31.1 minikube.k8s.io/commit=e286a113bb5db20a65222adef757d15268cdbb1a minikube.k8s.io/name=ingress-addon-legacy-050000 minikube.k8s.io/updated_at=2023_08_09T11_16_05_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 11:16:05.764914    2243 ops.go:34] apiserver oom_adj: -16
	I0809 11:16:05.884011    2243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 11:16:05.917026    2243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 11:16:06.451848    2243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 11:16:06.951712    2243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 11:16:07.451783    2243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 11:16:07.951814    2243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 11:16:08.451681    2243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 11:16:08.951697    2243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 11:16:09.451660    2243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 11:16:09.951692    2243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 11:16:10.451653    2243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 11:16:10.951397    2243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 11:16:11.451612    2243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 11:16:11.951457    2243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 11:16:12.451316    2243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 11:16:12.951562    2243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 11:16:13.451546    2243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 11:16:13.951285    2243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 11:16:14.451540    2243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 11:16:14.951514    2243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 11:16:15.451505    2243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 11:16:15.950623    2243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 11:16:16.451464    2243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 11:16:16.951392    2243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 11:16:17.451382    2243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 11:16:17.951435    2243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 11:16:18.451395    2243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 11:16:18.951337    2243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 11:16:19.451317    2243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 11:16:19.951126    2243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 11:16:20.451012    2243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 11:16:20.512933    2243 kubeadm.go:1081] duration metric: took 14.753480417s to wait for elevateKubeSystemPrivileges.
	I0809 11:16:20.512950    2243 kubeadm.go:406] StartCluster complete in 29.273673042s
	I0809 11:16:20.512959    2243 settings.go:142] acquiring lock: {Name:mkccab662ae5271e860bc4bdf3048d54a609848d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 11:16:20.513049    2243 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17011-995/kubeconfig
	I0809 11:16:20.513458    2243 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17011-995/kubeconfig: {Name:mk08b0de0097dc34716acdd012f0f4571979d434 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 11:16:20.513661    2243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0809 11:16:20.513715    2243 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0809 11:16:20.513758    2243 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-050000"
	I0809 11:16:20.513770    2243 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-050000"
	I0809 11:16:20.513779    2243 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-050000"
	I0809 11:16:20.513801    2243 host.go:66] Checking if "ingress-addon-legacy-050000" exists ...
	I0809 11:16:20.513804    2243 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-050000"
	I0809 11:16:20.513920    2243 kapi.go:59] client config for ingress-addon-legacy-050000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17011-995/.minikube/profiles/ingress-addon-legacy-050000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17011-995/.minikube/profiles/ingress-addon-legacy-050000/client.key", CAFile:"/Users/jenkins/minikube-integration/17011-995/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1065fc170), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0809 11:16:20.514036    2243 config.go:182] Loaded profile config "ingress-addon-legacy-050000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0809 11:16:20.514307    2243 cert_rotation.go:137] Starting client certificate rotation controller
	I0809 11:16:20.514916    2243 kapi.go:59] client config for ingress-addon-legacy-050000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17011-995/.minikube/profiles/ingress-addon-legacy-050000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17011-995/.minikube/profiles/ingress-addon-legacy-050000/client.key", CAFile:"/Users/jenkins/minikube-integration/17011-995/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1065fc170), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0809 11:16:20.519293    2243 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0809 11:16:20.522257    2243 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0809 11:16:20.522266    2243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0809 11:16:20.522276    2243 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/ingress-addon-legacy-050000/id_rsa Username:docker}
	I0809 11:16:20.526618    2243 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-050000"
	I0809 11:16:20.526637    2243 host.go:66] Checking if "ingress-addon-legacy-050000" exists ...
	I0809 11:16:20.527311    2243 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0809 11:16:20.527317    2243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0809 11:16:20.527328    2243 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/ingress-addon-legacy-050000/id_rsa Username:docker}
	I0809 11:16:20.529713    2243 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-050000" context rescaled to 1 replicas
	I0809 11:16:20.529731    2243 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.6 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0809 11:16:20.533206    2243 out.go:177] * Verifying Kubernetes components...
	I0809 11:16:20.541275    2243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0809 11:16:20.586265    2243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0809 11:16:20.594826    2243 kapi.go:59] client config for ingress-addon-legacy-050000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17011-995/.minikube/profiles/ingress-addon-legacy-050000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17011-995/.minikube/profiles/ingress-addon-legacy-050000/client.key", CAFile:"/Users/jenkins/minikube-integration/17011-995/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1065fc170), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0809 11:16:20.594958    2243 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-050000" to be "Ready" ...
	I0809 11:16:20.596270    2243 node_ready.go:49] node "ingress-addon-legacy-050000" has status "Ready":"True"
	I0809 11:16:20.596276    2243 node_ready.go:38] duration metric: took 1.310542ms waiting for node "ingress-addon-legacy-050000" to be "Ready" ...
	I0809 11:16:20.596280    2243 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0809 11:16:20.598888    2243 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-050000" in "kube-system" namespace to be "Ready" ...
	I0809 11:16:20.602936    2243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0809 11:16:20.605435    2243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0809 11:16:20.910954    2243 start.go:901] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0809 11:16:20.915147    2243 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0809 11:16:20.924162    2243 addons.go:502] enable addons completed in 410.454125ms: enabled=[default-storageclass storage-provisioner]
	I0809 11:16:22.107341    2243 pod_ready.go:92] pod "etcd-ingress-addon-legacy-050000" in "kube-system" namespace has status "Ready":"True"
	I0809 11:16:22.107354    2243 pod_ready.go:81] duration metric: took 1.508509333s waiting for pod "etcd-ingress-addon-legacy-050000" in "kube-system" namespace to be "Ready" ...
	I0809 11:16:22.107361    2243 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-050000" in "kube-system" namespace to be "Ready" ...
	I0809 11:16:22.618911    2243 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-050000" in "kube-system" namespace has status "Ready":"True"
	I0809 11:16:22.618919    2243 pod_ready.go:81] duration metric: took 511.571458ms waiting for pod "kube-apiserver-ingress-addon-legacy-050000" in "kube-system" namespace to be "Ready" ...
	I0809 11:16:22.618924    2243 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-050000" in "kube-system" namespace to be "Ready" ...
	I0809 11:16:22.797006    2243 request.go:628] Waited for 176.012375ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ingress-addon-legacy-050000
	I0809 11:16:22.997076    2243 request.go:628] Waited for 198.199292ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/nodes/ingress-addon-legacy-050000
	I0809 11:16:23.004148    2243 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-050000" in "kube-system" namespace has status "Ready":"True"
	I0809 11:16:23.004182    2243 pod_ready.go:81] duration metric: took 385.262833ms waiting for pod "kube-controller-manager-ingress-addon-legacy-050000" in "kube-system" namespace to be "Ready" ...
	I0809 11:16:23.004213    2243 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-df24k" in "kube-system" namespace to be "Ready" ...
	I0809 11:16:23.196959    2243 request.go:628] Waited for 192.6765ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-df24k
	I0809 11:16:23.397012    2243 request.go:628] Waited for 196.692042ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/nodes/ingress-addon-legacy-050000
	I0809 11:16:23.403088    2243 pod_ready.go:92] pod "kube-proxy-df24k" in "kube-system" namespace has status "Ready":"True"
	I0809 11:16:23.403115    2243 pod_ready.go:81] duration metric: took 398.901042ms waiting for pod "kube-proxy-df24k" in "kube-system" namespace to be "Ready" ...
	I0809 11:16:23.403134    2243 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-050000" in "kube-system" namespace to be "Ready" ...
	I0809 11:16:23.595616    2243 request.go:628] Waited for 192.375042ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-050000
	I0809 11:16:23.797025    2243 request.go:628] Waited for 193.469875ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/nodes/ingress-addon-legacy-050000
	I0809 11:16:23.805214    2243 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-050000" in "kube-system" namespace has status "Ready":"True"
	I0809 11:16:23.805250    2243 pod_ready.go:81] duration metric: took 402.106125ms waiting for pod "kube-scheduler-ingress-addon-legacy-050000" in "kube-system" namespace to be "Ready" ...
	I0809 11:16:23.805270    2243 pod_ready.go:38] duration metric: took 3.209089583s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0809 11:16:23.805313    2243 api_server.go:52] waiting for apiserver process to appear ...
	I0809 11:16:23.805612    2243 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0809 11:16:23.823167    2243 api_server.go:72] duration metric: took 3.293501958s to wait for apiserver process to appear ...
	I0809 11:16:23.823221    2243 api_server.go:88] waiting for apiserver healthz status ...
	I0809 11:16:23.823259    2243 api_server.go:253] Checking apiserver healthz at https://192.168.105.6:8443/healthz ...
	I0809 11:16:23.832918    2243 api_server.go:279] https://192.168.105.6:8443/healthz returned 200:
	ok
	I0809 11:16:23.833979    2243 api_server.go:141] control plane version: v1.18.20
	I0809 11:16:23.833995    2243 api_server.go:131] duration metric: took 10.765708ms to wait for apiserver health ...
	I0809 11:16:23.834002    2243 system_pods.go:43] waiting for kube-system pods to appear ...
	I0809 11:16:23.996972    2243 request.go:628] Waited for 162.895ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods
	I0809 11:16:24.016639    2243 system_pods.go:59] 7 kube-system pods found
	I0809 11:16:24.016695    2243 system_pods.go:61] "coredns-66bff467f8-gtz4c" [c3b00005-96fd-47be-a928-bac2d6a16bba] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0809 11:16:24.016708    2243 system_pods.go:61] "etcd-ingress-addon-legacy-050000" [6145499c-2ac2-43d1-928c-21b0a891207e] Running
	I0809 11:16:24.016717    2243 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-050000" [87ecce31-aae5-4247-bc90-ecea1dff2f9b] Running
	I0809 11:16:24.016725    2243 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-050000" [9b97cd2d-8eb3-43da-a904-ed7762e5adff] Running
	I0809 11:16:24.016733    2243 system_pods.go:61] "kube-proxy-df24k" [f9658f76-f3bf-48c9-9e4f-ae4f06915acd] Running
	I0809 11:16:24.016743    2243 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-050000" [adfb0c93-0789-49ad-9c1a-ec6d23fa9f9e] Running
	I0809 11:16:24.016750    2243 system_pods.go:61] "storage-provisioner" [f4a6f9ee-eab4-497a-9705-f86d2defc5cf] Running
	I0809 11:16:24.016759    2243 system_pods.go:74] duration metric: took 182.755833ms to wait for pod list to return data ...
	I0809 11:16:24.016770    2243 default_sa.go:34] waiting for default service account to be created ...
	I0809 11:16:24.197057    2243 request.go:628] Waited for 180.098542ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/default/serviceaccounts
	I0809 11:16:24.203868    2243 default_sa.go:45] found service account: "default"
	I0809 11:16:24.203903    2243 default_sa.go:55] duration metric: took 187.130833ms for default service account to be created ...
	I0809 11:16:24.203930    2243 system_pods.go:116] waiting for k8s-apps to be running ...
	I0809 11:16:24.396928    2243 request.go:628] Waited for 192.906208ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods
	I0809 11:16:24.408088    2243 system_pods.go:86] 7 kube-system pods found
	I0809 11:16:24.408128    2243 system_pods.go:89] "coredns-66bff467f8-gtz4c" [c3b00005-96fd-47be-a928-bac2d6a16bba] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0809 11:16:24.408139    2243 system_pods.go:89] "etcd-ingress-addon-legacy-050000" [6145499c-2ac2-43d1-928c-21b0a891207e] Running
	I0809 11:16:24.408156    2243 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-050000" [87ecce31-aae5-4247-bc90-ecea1dff2f9b] Running
	I0809 11:16:24.408167    2243 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-050000" [9b97cd2d-8eb3-43da-a904-ed7762e5adff] Running
	I0809 11:16:24.408176    2243 system_pods.go:89] "kube-proxy-df24k" [f9658f76-f3bf-48c9-9e4f-ae4f06915acd] Running
	I0809 11:16:24.408183    2243 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-050000" [adfb0c93-0789-49ad-9c1a-ec6d23fa9f9e] Running
	I0809 11:16:24.408202    2243 system_pods.go:89] "storage-provisioner" [f4a6f9ee-eab4-497a-9705-f86d2defc5cf] Running
	I0809 11:16:24.408222    2243 system_pods.go:126] duration metric: took 204.290459ms to wait for k8s-apps to be running ...
	I0809 11:16:24.408238    2243 system_svc.go:44] waiting for kubelet service to be running ....
	I0809 11:16:24.408485    2243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0809 11:16:24.422740    2243 system_svc.go:56] duration metric: took 14.499875ms WaitForService to wait for kubelet.
	I0809 11:16:24.422757    2243 kubeadm.go:581] duration metric: took 3.893144583s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0809 11:16:24.422777    2243 node_conditions.go:102] verifying NodePressure condition ...
	I0809 11:16:24.596951    2243 request.go:628] Waited for 174.09775ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/nodes
	I0809 11:16:24.605631    2243 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0809 11:16:24.605689    2243 node_conditions.go:123] node cpu capacity is 2
	I0809 11:16:24.605719    2243 node_conditions.go:105] duration metric: took 182.938ms to run NodePressure ...
	I0809 11:16:24.605746    2243 start.go:228] waiting for startup goroutines ...
	I0809 11:16:24.605761    2243 start.go:233] waiting for cluster config update ...
	I0809 11:16:24.605797    2243 start.go:242] writing updated cluster config ...
	I0809 11:16:24.607009    2243 ssh_runner.go:195] Run: rm -f paused
	I0809 11:16:24.669426    2243 start.go:599] kubectl: 1.27.2, cluster: 1.18.20 (minor skew: 9)
	I0809 11:16:24.673097    2243 out.go:177] 
	W0809 11:16:24.674483    2243 out.go:239] ! /usr/local/bin/kubectl is version 1.27.2, which may have incompatibilities with Kubernetes 1.18.20.
	I0809 11:16:24.678912    2243 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0809 11:16:24.685970    2243 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-050000" cluster and "default" namespace by default
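The run above warns of a minor-version skew of 9 between the host kubectl (1.27.2) and the cluster (1.18.20). As an illustration of how that skew number comes about (a minimal sketch; `minor_skew` is a hypothetical helper, not minikube's implementation):

```python
def minor_skew(client: str, server: str) -> int:
    """Minor-version difference between two 'major.minor.patch' version strings."""
    return int(client.split(".")[1]) - int(server.split(".")[1])

# Version strings taken from the log line above: kubectl 1.27.2 vs cluster 1.18.20.
skew = minor_skew("1.27.2", "1.18.20")  # 27 - 18 = 9
```

This is why the log suggests `minikube kubectl --`, which downloads a kubectl matching the cluster version instead of relying on the host binary.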
	
	* 
	* ==> Docker <==
	* -- Journal begins at Wed 2023-08-09 18:15:38 UTC, ends at Wed 2023-08-09 18:17:34 UTC. --
	Aug 09 18:17:10 ingress-addon-legacy-050000 dockerd[1085]: time="2023-08-09T18:17:10.030364135Z" level=warning msg="cleaning up after shim disconnected" id=fae1900eed3261dcd06243035b2da8a262af399ce3b67e0847822a788b8a8a6c namespace=moby
	Aug 09 18:17:10 ingress-addon-legacy-050000 dockerd[1085]: time="2023-08-09T18:17:10.030368469Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 09 18:17:23 ingress-addon-legacy-050000 dockerd[1085]: time="2023-08-09T18:17:23.265085264Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 09 18:17:23 ingress-addon-legacy-050000 dockerd[1085]: time="2023-08-09T18:17:23.265133681Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 09 18:17:23 ingress-addon-legacy-050000 dockerd[1085]: time="2023-08-09T18:17:23.265184973Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 09 18:17:23 ingress-addon-legacy-050000 dockerd[1085]: time="2023-08-09T18:17:23.265199514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 09 18:17:23 ingress-addon-legacy-050000 dockerd[1079]: time="2023-08-09T18:17:23.316019992Z" level=info msg="ignoring event" container=3c3ddb571c0b51f61326d0732debff1d4044d95b12ae9cc76c0e59ca1495a7a9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 09 18:17:23 ingress-addon-legacy-050000 dockerd[1085]: time="2023-08-09T18:17:23.316624575Z" level=info msg="shim disconnected" id=3c3ddb571c0b51f61326d0732debff1d4044d95b12ae9cc76c0e59ca1495a7a9 namespace=moby
	Aug 09 18:17:23 ingress-addon-legacy-050000 dockerd[1085]: time="2023-08-09T18:17:23.316658283Z" level=warning msg="cleaning up after shim disconnected" id=3c3ddb571c0b51f61326d0732debff1d4044d95b12ae9cc76c0e59ca1495a7a9 namespace=moby
	Aug 09 18:17:23 ingress-addon-legacy-050000 dockerd[1085]: time="2023-08-09T18:17:23.316663450Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 09 18:17:23 ingress-addon-legacy-050000 dockerd[1085]: time="2023-08-09T18:17:23.321465119Z" level=warning msg="cleanup warnings time=\"2023-08-09T18:17:23Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Aug 09 18:17:24 ingress-addon-legacy-050000 dockerd[1079]: time="2023-08-09T18:17:24.231297163Z" level=info msg="ignoring event" container=21aed973fc780e01e64852460a6077da48d7ba31640cea412799f9ed1faa732d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 09 18:17:24 ingress-addon-legacy-050000 dockerd[1085]: time="2023-08-09T18:17:24.231576413Z" level=info msg="shim disconnected" id=21aed973fc780e01e64852460a6077da48d7ba31640cea412799f9ed1faa732d namespace=moby
	Aug 09 18:17:24 ingress-addon-legacy-050000 dockerd[1085]: time="2023-08-09T18:17:24.231614830Z" level=warning msg="cleaning up after shim disconnected" id=21aed973fc780e01e64852460a6077da48d7ba31640cea412799f9ed1faa732d namespace=moby
	Aug 09 18:17:24 ingress-addon-legacy-050000 dockerd[1085]: time="2023-08-09T18:17:24.231622663Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 09 18:17:29 ingress-addon-legacy-050000 dockerd[1079]: time="2023-08-09T18:17:29.716685830Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=617310c1c643ec0667ce360ad38381346ec690571c1aceb221255aa0f29b2bc2
	Aug 09 18:17:29 ingress-addon-legacy-050000 dockerd[1079]: time="2023-08-09T18:17:29.729233709Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=617310c1c643ec0667ce360ad38381346ec690571c1aceb221255aa0f29b2bc2
	Aug 09 18:17:29 ingress-addon-legacy-050000 dockerd[1079]: time="2023-08-09T18:17:29.826108949Z" level=info msg="ignoring event" container=617310c1c643ec0667ce360ad38381346ec690571c1aceb221255aa0f29b2bc2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 09 18:17:29 ingress-addon-legacy-050000 dockerd[1085]: time="2023-08-09T18:17:29.826462116Z" level=info msg="shim disconnected" id=617310c1c643ec0667ce360ad38381346ec690571c1aceb221255aa0f29b2bc2 namespace=moby
	Aug 09 18:17:29 ingress-addon-legacy-050000 dockerd[1085]: time="2023-08-09T18:17:29.826986033Z" level=warning msg="cleaning up after shim disconnected" id=617310c1c643ec0667ce360ad38381346ec690571c1aceb221255aa0f29b2bc2 namespace=moby
	Aug 09 18:17:29 ingress-addon-legacy-050000 dockerd[1085]: time="2023-08-09T18:17:29.826997241Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 09 18:17:29 ingress-addon-legacy-050000 dockerd[1079]: time="2023-08-09T18:17:29.864512961Z" level=info msg="ignoring event" container=c694ac10dee8c1a1546ac7da122ecabd1c6fd3e5e326298232e24c69675edbb1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 09 18:17:29 ingress-addon-legacy-050000 dockerd[1085]: time="2023-08-09T18:17:29.864699586Z" level=info msg="shim disconnected" id=c694ac10dee8c1a1546ac7da122ecabd1c6fd3e5e326298232e24c69675edbb1 namespace=moby
	Aug 09 18:17:29 ingress-addon-legacy-050000 dockerd[1085]: time="2023-08-09T18:17:29.864784253Z" level=warning msg="cleaning up after shim disconnected" id=c694ac10dee8c1a1546ac7da122ecabd1c6fd3e5e326298232e24c69675edbb1 namespace=moby
	Aug 09 18:17:29 ingress-addon-legacy-050000 dockerd[1085]: time="2023-08-09T18:17:29.864795795Z" level=info msg="cleaning up dead shim" namespace=moby
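The "Container failed to exit within 2s of signal 15 - using the force" lines above record Docker's stop grace period: SIGTERM first, SIGKILL if the container is still running when the timeout expires. A minimal sketch of that escalation logic (the function is illustrative, not dockerd's code):

```python
import signal

def stop_signals(exited_within_grace: bool) -> list:
    """Return the signals a docker-style graceful stop would deliver.

    SIGTERM (signal 15) is always sent first; SIGKILL (signal 9) follows
    only if the container did not exit within the grace period.
    """
    signals = [signal.SIGTERM]
    if not exited_within_grace:
        signals.append(signal.SIGKILL)  # "using the force"
    return signals
```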
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID
	3c3ddb571c0b5       13753a81eccfd                                                                                                      11 seconds ago       Exited              hello-world-app           2                   a69c36e0a1555
	28639cdbecd98       nginx@sha256:647c5c83418c19eef0cddc647b9899326e3081576390c4c7baa4fce545123b6c                                      33 seconds ago       Running             nginx                     0                   da48c0c358166
	617310c1c643e       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   56 seconds ago       Exited              controller                0                   c694ac10dee8c
	c153098926edb       jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7               About a minute ago   Exited              patch                     0                   795318b85d4a7
	7df80b8c48410       jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7               About a minute ago   Exited              create                    0                   29167ef0abf01
	00378453b4fec       gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944    About a minute ago   Running             storage-provisioner       0                   5f4471e614bc1
	e327dcb7c66e5       6e17ba78cf3eb                                                                                                      About a minute ago   Running             coredns                   0                   031a69d13a040
	a70b1c0bc37b3       565297bc6f7d4                                                                                                      About a minute ago   Running             kube-proxy                0                   f02e6242777e0
	940eaf1287a53       ab707b0a0ea33                                                                                                      About a minute ago   Running             etcd                      0                   7b694ca368a92
	5749c5d884974       2694cf044d665                                                                                                      About a minute ago   Running             kube-apiserver            0                   f95cc6dae9508
	f275c533d168b       68a4fac29a865                                                                                                      About a minute ago   Running             kube-controller-manager   0                   c1d925a8be9e7
	bba7156bcea3d       095f37015706d                                                                                                      About a minute ago   Running             kube-scheduler            0                   d9b1ededcd229
	
	* 
	* ==> coredns [e327dcb7c66e] <==
	* [INFO] 172.17.0.1:22508 - 40478 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00005s
	[INFO] 172.17.0.1:22508 - 19657 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000023291s
	[INFO] 172.17.0.1:22508 - 1575 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000023459s
	[INFO] 172.17.0.1:9956 - 44703 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00001975s
	[INFO] 172.17.0.1:9956 - 50642 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000022833s
	[INFO] 172.17.0.1:22508 - 22825 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000071084s
	[INFO] 172.17.0.1:9956 - 9433 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000009167s
	[INFO] 172.17.0.1:22508 - 17947 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00005825s
	[INFO] 172.17.0.1:9956 - 20709 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000008542s
	[INFO] 172.17.0.1:9956 - 21316 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00002775s
	[INFO] 172.17.0.1:9956 - 16528 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000010583s
	[INFO] 172.17.0.1:60036 - 28740 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000072125s
	[INFO] 172.17.0.1:30130 - 18420 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00009375s
	[INFO] 172.17.0.1:60036 - 27947 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000011916s
	[INFO] 172.17.0.1:30130 - 4582 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00001125s
	[INFO] 172.17.0.1:60036 - 40083 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000040167s
	[INFO] 172.17.0.1:60036 - 46352 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000014125s
	[INFO] 172.17.0.1:60036 - 44950 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000013292s
	[INFO] 172.17.0.1:30130 - 62033 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000038208s
	[INFO] 172.17.0.1:60036 - 37273 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000032584s
	[INFO] 172.17.0.1:30130 - 23774 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000020875s
	[INFO] 172.17.0.1:60036 - 46329 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.0000165s
	[INFO] 172.17.0.1:30130 - 22863 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000036875s
	[INFO] 172.17.0.1:30130 - 11009 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000009208s
	[INFO] 172.17.0.1:30130 - 3401 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.001267709s
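The NXDOMAIN/NOERROR pattern above is the standard resolv.conf search-list expansion: with the in-cluster default of `ndots:5`, a name with fewer than five dots is tried with each search suffix before being tried as-is, so CoreDNS sees queries like `hello-world-app.default.svc.cluster.local.svc.cluster.local` fail before the bare name resolves. A minimal sketch of that expansion (the search list shown is an assumption based on a typical pod resolv.conf, and this is an illustration, not CoreDNS's code):

```python
def expand(name: str, search: list, ndots: int = 5) -> list:
    """Return candidate FQDNs in the order a glibc-style resolver tries them."""
    if name.endswith("."):           # already fully qualified: no expansion
        return [name]
    if name.count(".") >= ndots:     # enough dots: try the name as-is first
        return [name] + [f"{name}.{d}" for d in search]
    # fewer dots than ndots: try each search suffix first, then the bare name
    return [f"{name}.{d}" for d in search] + [name]

# Typical search list for a pod; candidates mirror the query order in the log.
search_list = ["svc.cluster.local", "cluster.local"]
candidates = expand("hello-world-app.default.svc.cluster.local", search_list)
```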
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-050000
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-050000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e286a113bb5db20a65222adef757d15268cdbb1a
	                    minikube.k8s.io/name=ingress-addon-legacy-050000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_09T11_16_05_0700
	                    minikube.k8s.io/version=v1.31.1
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 09 Aug 2023 18:16:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-050000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 09 Aug 2023 18:17:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 09 Aug 2023 18:17:12 +0000   Wed, 09 Aug 2023 18:16:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 09 Aug 2023 18:17:12 +0000   Wed, 09 Aug 2023 18:16:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 09 Aug 2023 18:17:12 +0000   Wed, 09 Aug 2023 18:16:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 09 Aug 2023 18:17:12 +0000   Wed, 09 Aug 2023 18:16:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.6
	  Hostname:    ingress-addon-legacy-050000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             4003124Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             4003124Ki
	  pods:               110
	System Info:
	  Machine ID:                 1c879b25d7214498a8ba7ac7588d5b43
	  System UUID:                1c879b25d7214498a8ba7ac7588d5b43
	  Boot ID:                    4dbc37d4-bc9f-40cd-9953-da8b37ca9d9e
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.4
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-qh6nw                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 coredns-66bff467f8-gtz4c                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     73s
	  kube-system                 etcd-ingress-addon-legacy-050000                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-apiserver-ingress-addon-legacy-050000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-050000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-proxy-df24k                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-scheduler-ingress-addon-legacy-050000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         73s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (1%)   170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  NodeHasSufficientMemory  95s (x5 over 95s)  kubelet     Node ingress-addon-legacy-050000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    95s (x5 over 95s)  kubelet     Node ingress-addon-legacy-050000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     95s (x4 over 95s)  kubelet     Node ingress-addon-legacy-050000 status is now: NodeHasSufficientPID
	  Normal  Starting                 82s                kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  82s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  82s                kubelet     Node ingress-addon-legacy-050000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    82s                kubelet     Node ingress-addon-legacy-050000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     82s                kubelet     Node ingress-addon-legacy-050000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                82s                kubelet     Node ingress-addon-legacy-050000 status is now: NodeReady
	  Normal  Starting                 73s                kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Aug 9 18:15] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.662094] EINJ: EINJ table not found.
	[  +0.518084] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.045400] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000795] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +4.197852] systemd-fstab-generator[476]: Ignoring "noauto" for root device
	[  +0.079818] systemd-fstab-generator[487]: Ignoring "noauto" for root device
	[  +0.478444] systemd-fstab-generator[788]: Ignoring "noauto" for root device
	[  +0.154196] systemd-fstab-generator[824]: Ignoring "noauto" for root device
	[  +0.078920] systemd-fstab-generator[835]: Ignoring "noauto" for root device
	[  +0.087987] systemd-fstab-generator[848]: Ignoring "noauto" for root device
	[  +4.328132] systemd-fstab-generator[1052]: Ignoring "noauto" for root device
	[  +1.507848] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.624369] systemd-fstab-generator[1524]: Ignoring "noauto" for root device
	[  +7.654961] kauditd_printk_skb: 29 callbacks suppressed
	[  +0.089249] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Aug 9 18:16] systemd-fstab-generator[2611]: Ignoring "noauto" for root device
	[ +15.638153] kauditd_printk_skb: 7 callbacks suppressed
	[  +6.938056] kauditd_printk_skb: 13 callbacks suppressed
	[  +1.527716] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[ +28.344792] kauditd_printk_skb: 1 callbacks suppressed
	[Aug 9 18:17] kauditd_printk_skb: 1 callbacks suppressed
	
	* 
	* ==> etcd [940eaf1287a5] <==
	* raft2023/08/09 18:16:00 INFO: ed054832bd1917e1 became follower at term 0
	raft2023/08/09 18:16:00 INFO: newRaft ed054832bd1917e1 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/08/09 18:16:00 INFO: ed054832bd1917e1 became follower at term 1
	raft2023/08/09 18:16:00 INFO: ed054832bd1917e1 switched to configuration voters=(17079136544630577121)
	2023-08-09 18:16:00.364483 W | auth: simple token is not cryptographically signed
	2023-08-09 18:16:00.365327 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-08-09 18:16:00.366221 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-08-09 18:16:00.366302 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-08-09 18:16:00.366456 I | etcdserver: ed054832bd1917e1 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-08-09 18:16:00.366587 I | embed: listening for peers on 192.168.105.6:2380
	raft2023/08/09 18:16:00 INFO: ed054832bd1917e1 switched to configuration voters=(17079136544630577121)
	2023-08-09 18:16:00.366794 I | etcdserver/membership: added member ed054832bd1917e1 [https://192.168.105.6:2380] to cluster 45a39c2c59b0edf4
	raft2023/08/09 18:16:01 INFO: ed054832bd1917e1 is starting a new election at term 1
	raft2023/08/09 18:16:01 INFO: ed054832bd1917e1 became candidate at term 2
	raft2023/08/09 18:16:01 INFO: ed054832bd1917e1 received MsgVoteResp from ed054832bd1917e1 at term 2
	raft2023/08/09 18:16:01 INFO: ed054832bd1917e1 became leader at term 2
	raft2023/08/09 18:16:01 INFO: raft.node: ed054832bd1917e1 elected leader ed054832bd1917e1 at term 2
	2023-08-09 18:16:01.259993 I | etcdserver: setting up the initial cluster version to 3.4
	2023-08-09 18:16:01.261595 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-08-09 18:16:01.261792 I | etcdserver: published {Name:ingress-addon-legacy-050000 ClientURLs:[https://192.168.105.6:2379]} to cluster 45a39c2c59b0edf4
	2023-08-09 18:16:01.262078 I | etcdserver/api: enabled capabilities for version 3.4
	2023-08-09 18:16:01.262201 I | embed: ready to serve client requests
	2023-08-09 18:16:01.262332 I | embed: ready to serve client requests
	2023-08-09 18:16:01.265649 I | embed: serving client requests on 127.0.0.1:2379
	2023-08-09 18:16:01.266112 I | embed: serving client requests on 192.168.105.6:2379
	
	* 
	* ==> kernel <==
	*  18:17:34 up 1 min,  0 users,  load average: 0.52, 0.20, 0.07
	Linux ingress-addon-legacy-050000 5.10.57 #1 SMP PREEMPT Mon Jul 31 23:05:09 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [5749c5d88497] <==
	* I0809 18:16:02.824096       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	I0809 18:16:02.824139       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	I0809 18:16:02.904056       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0809 18:16:02.904073       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0809 18:16:02.904103       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0809 18:16:02.904115       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0809 18:16:02.904120       1 cache.go:39] Caches are synced for autoregister controller
	I0809 18:16:03.805742       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0809 18:16:03.806103       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0809 18:16:03.812420       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0809 18:16:03.818183       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0809 18:16:03.818223       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0809 18:16:03.988596       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0809 18:16:03.999720       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0809 18:16:04.074706       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.105.6]
	I0809 18:16:04.075065       1 controller.go:609] quota admission added evaluator for: endpoints
	I0809 18:16:04.078642       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0809 18:16:05.103368       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0809 18:16:05.696480       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0809 18:16:05.886909       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0809 18:16:12.132733       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0809 18:16:20.706425       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0809 18:16:21.007038       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0809 18:16:25.154646       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0809 18:16:58.163473       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	* 
	* ==> kube-controller-manager [f275c533d168] <==
	* I0809 18:16:21.010503       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"e7ebe4b3-731c-49d3-ba69-574aac65ff62", APIVersion:"apps/v1", ResourceVersion:"319", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 1
	I0809 18:16:21.013627       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"dbb4a8dc-8231-40db-baae-d916a6e05e0d", APIVersion:"apps/v1", ResourceVersion:"328", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-gtz4c
	I0809 18:16:21.040855       1 shared_informer.go:230] Caches are synced for endpoint 
	I0809 18:16:21.085933       1 shared_informer.go:230] Caches are synced for disruption 
	I0809 18:16:21.085945       1 disruption.go:339] Sending events to api server.
	I0809 18:16:21.194276       1 shared_informer.go:230] Caches are synced for persistent volume 
	I0809 18:16:21.195675       1 shared_informer.go:230] Caches are synced for resource quota 
	I0809 18:16:21.201667       1 shared_informer.go:230] Caches are synced for expand 
	I0809 18:16:21.202032       1 shared_informer.go:230] Caches are synced for resource quota 
	I0809 18:16:21.232478       1 shared_informer.go:230] Caches are synced for PVC protection 
	I0809 18:16:21.256403       1 shared_informer.go:230] Caches are synced for attach detach 
	I0809 18:16:21.261738       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0809 18:16:21.261746       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0809 18:16:21.284972       1 shared_informer.go:230] Caches are synced for stateful set 
	I0809 18:16:21.606278       1 request.go:621] Throttling request took 1.041423723s, request: GET:https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1beta1?timeout=32s
	I0809 18:16:22.059138       1 shared_informer.go:223] Waiting for caches to sync for garbage collector
	I0809 18:16:22.059320       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0809 18:16:25.156272       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"ca219ee0-8d40-41b2-8f01-c874286de58e", APIVersion:"apps/v1", ResourceVersion:"414", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0809 18:16:25.161697       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"6938a494-6cee-450d-92d4-eedd5ef98b9b", APIVersion:"batch/v1", ResourceVersion:"416", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-gst84
	I0809 18:16:25.164515       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"fd7cee16-5fa1-47a1-9148-82b531819343", APIVersion:"apps/v1", ResourceVersion:"415", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-pxj69
	I0809 18:16:25.183707       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"88b5d4e9-d321-433b-830e-d222e4c41176", APIVersion:"batch/v1", ResourceVersion:"425", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-744nw
	I0809 18:16:29.466671       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"6938a494-6cee-450d-92d4-eedd5ef98b9b", APIVersion:"batch/v1", ResourceVersion:"427", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0809 18:16:29.476111       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"88b5d4e9-d321-433b-830e-d222e4c41176", APIVersion:"batch/v1", ResourceVersion:"436", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0809 18:17:07.447660       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"9fb78930-4661-41f3-8f50-c8e6207110f7", APIVersion:"apps/v1", ResourceVersion:"562", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0809 18:17:07.460246       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"8ff29945-bf56-4d27-bacb-224d713ab233", APIVersion:"apps/v1", ResourceVersion:"563", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-qh6nw
	
	* 
	* ==> kube-proxy [a70b1c0bc37b] <==
	* W0809 18:16:21.216423       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0809 18:16:21.220585       1 node.go:136] Successfully retrieved node IP: 192.168.105.6
	I0809 18:16:21.220601       1 server_others.go:186] Using iptables Proxier.
	I0809 18:16:21.220728       1 server.go:583] Version: v1.18.20
	I0809 18:16:21.221139       1 config.go:315] Starting service config controller
	I0809 18:16:21.221149       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0809 18:16:21.221590       1 config.go:133] Starting endpoints config controller
	I0809 18:16:21.221593       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0809 18:16:21.321272       1 shared_informer.go:230] Caches are synced for service config 
	I0809 18:16:21.321675       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [bba7156bcea3] <==
	* I0809 18:16:00.565811       1 serving.go:313] Generated self-signed cert in-memory
	W0809 18:16:02.843924       1 authentication.go:349] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0809 18:16:02.843939       1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0809 18:16:02.843944       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0809 18:16:02.843947       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0809 18:16:02.860116       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0809 18:16:02.860129       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0809 18:16:02.861978       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0809 18:16:02.862080       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0809 18:16:02.862130       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0809 18:16:02.862175       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0809 18:16:02.864542       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0809 18:16:02.864632       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0809 18:16:02.864688       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0809 18:16:02.864764       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0809 18:16:02.864848       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0809 18:16:02.864894       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0809 18:16:02.865049       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0809 18:16:02.865112       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0809 18:16:02.865147       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0809 18:16:02.865209       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0809 18:16:02.865244       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0809 18:16:02.865396       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0809 18:16:03.893547       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0809 18:16:04.462288       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-08-09 18:15:38 UTC, ends at Wed 2023-08-09 18:17:34 UTC. --
	Aug 09 18:17:11 ingress-addon-legacy-050000 kubelet[2617]: I0809 18:17:11.984298    2617 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: fae1900eed3261dcd06243035b2da8a262af399ce3b67e0847822a788b8a8a6c
	Aug 09 18:17:11 ingress-addon-legacy-050000 kubelet[2617]: E0809 18:17:11.984835    2617 pod_workers.go:191] Error syncing pod 6bdd4ad3-6e54-4895-9a82-8468969ee1a5 ("hello-world-app-5f5d8b66bb-qh6nw_default(6bdd4ad3-6e54-4895-9a82-8468969ee1a5)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-qh6nw_default(6bdd4ad3-6e54-4895-9a82-8468969ee1a5)"
	Aug 09 18:17:12 ingress-addon-legacy-050000 kubelet[2617]: I0809 18:17:12.198878    2617 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: dbd10d8fef26aba055bfbef3bb820b20a06095429ce9436dec94a27ac0d7e69e
	Aug 09 18:17:12 ingress-addon-legacy-050000 kubelet[2617]: E0809 18:17:12.199203    2617 pod_workers.go:191] Error syncing pod 76a4944d-efa3-4102-94cc-dc3b37fc5635 ("kube-ingress-dns-minikube_kube-system(76a4944d-efa3-4102-94cc-dc3b37fc5635)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with CrashLoopBackOff: "back-off 20s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(76a4944d-efa3-4102-94cc-dc3b37fc5635)"
	Aug 09 18:17:22 ingress-addon-legacy-050000 kubelet[2617]: I0809 18:17:22.846996    2617 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-gzdpc" (UniqueName: "kubernetes.io/secret/76a4944d-efa3-4102-94cc-dc3b37fc5635-minikube-ingress-dns-token-gzdpc") pod "76a4944d-efa3-4102-94cc-dc3b37fc5635" (UID: "76a4944d-efa3-4102-94cc-dc3b37fc5635")
	Aug 09 18:17:22 ingress-addon-legacy-050000 kubelet[2617]: I0809 18:17:22.850968    2617 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76a4944d-efa3-4102-94cc-dc3b37fc5635-minikube-ingress-dns-token-gzdpc" (OuterVolumeSpecName: "minikube-ingress-dns-token-gzdpc") pod "76a4944d-efa3-4102-94cc-dc3b37fc5635" (UID: "76a4944d-efa3-4102-94cc-dc3b37fc5635"). InnerVolumeSpecName "minikube-ingress-dns-token-gzdpc". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 09 18:17:22 ingress-addon-legacy-050000 kubelet[2617]: I0809 18:17:22.948482    2617 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-gzdpc" (UniqueName: "kubernetes.io/secret/76a4944d-efa3-4102-94cc-dc3b37fc5635-minikube-ingress-dns-token-gzdpc") on node "ingress-addon-legacy-050000" DevicePath ""
	Aug 09 18:17:23 ingress-addon-legacy-050000 kubelet[2617]: I0809 18:17:23.206591    2617 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: fae1900eed3261dcd06243035b2da8a262af399ce3b67e0847822a788b8a8a6c
	Aug 09 18:17:23 ingress-addon-legacy-050000 kubelet[2617]: W0809 18:17:23.329318    2617 container.go:412] Failed to create summary reader for "/kubepods/besteffort/pod6bdd4ad3-6e54-4895-9a82-8468969ee1a5/3c3ddb571c0b51f61326d0732debff1d4044d95b12ae9cc76c0e59ca1495a7a9": none of the resources are being tracked.
	Aug 09 18:17:24 ingress-addon-legacy-050000 kubelet[2617]: W0809 18:17:24.186829    2617 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-qh6nw through plugin: invalid network status for
	Aug 09 18:17:24 ingress-addon-legacy-050000 kubelet[2617]: I0809 18:17:24.193782    2617 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: fae1900eed3261dcd06243035b2da8a262af399ce3b67e0847822a788b8a8a6c
	Aug 09 18:17:24 ingress-addon-legacy-050000 kubelet[2617]: I0809 18:17:24.194435    2617 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 3c3ddb571c0b51f61326d0732debff1d4044d95b12ae9cc76c0e59ca1495a7a9
	Aug 09 18:17:24 ingress-addon-legacy-050000 kubelet[2617]: E0809 18:17:24.195266    2617 pod_workers.go:191] Error syncing pod 6bdd4ad3-6e54-4895-9a82-8468969ee1a5 ("hello-world-app-5f5d8b66bb-qh6nw_default(6bdd4ad3-6e54-4895-9a82-8468969ee1a5)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-qh6nw_default(6bdd4ad3-6e54-4895-9a82-8468969ee1a5)"
	Aug 09 18:17:25 ingress-addon-legacy-050000 kubelet[2617]: W0809 18:17:25.203266    2617 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-qh6nw through plugin: invalid network status for
	Aug 09 18:17:25 ingress-addon-legacy-050000 kubelet[2617]: I0809 18:17:25.208956    2617 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: dbd10d8fef26aba055bfbef3bb820b20a06095429ce9436dec94a27ac0d7e69e
	Aug 09 18:17:27 ingress-addon-legacy-050000 kubelet[2617]: E0809 18:17:27.711331    2617 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-pxj69.1779c9d410f9d19c", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-pxj69", UID:"4bbd9da9-32ba-4a76-9f6d-2c2320eb4714", APIVersion:"v1", ResourceVersion:"430", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-050000"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc12d13edea486b9c, ext:82035087200, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc12d13edea486b9c, ext:82035087200, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-pxj69.1779c9d410f9d19c" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Aug 09 18:17:27 ingress-addon-legacy-050000 kubelet[2617]: E0809 18:17:27.717986    2617 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-pxj69.1779c9d410f9d19c", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-pxj69", UID:"4bbd9da9-32ba-4a76-9f6d-2c2320eb4714", APIVersion:"v1", ResourceVersion:"430", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-050000"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc12d13edea486b9c, ext:82035087200, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc12d13edea7f4bf4, ext:82038683576, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-pxj69.1779c9d410f9d19c" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Aug 09 18:17:30 ingress-addon-legacy-050000 kubelet[2617]: W0809 18:17:30.284695    2617 pod_container_deletor.go:77] Container "c694ac10dee8c1a1546ac7da122ecabd1c6fd3e5e326298232e24c69675edbb1" not found in pod's containers
	Aug 09 18:17:31 ingress-addon-legacy-050000 kubelet[2617]: I0809 18:17:31.958867    2617 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/4bbd9da9-32ba-4a76-9f6d-2c2320eb4714-webhook-cert") pod "4bbd9da9-32ba-4a76-9f6d-2c2320eb4714" (UID: "4bbd9da9-32ba-4a76-9f6d-2c2320eb4714")
	Aug 09 18:17:31 ingress-addon-legacy-050000 kubelet[2617]: I0809 18:17:31.958978    2617 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-mz9db" (UniqueName: "kubernetes.io/secret/4bbd9da9-32ba-4a76-9f6d-2c2320eb4714-ingress-nginx-token-mz9db") pod "4bbd9da9-32ba-4a76-9f6d-2c2320eb4714" (UID: "4bbd9da9-32ba-4a76-9f6d-2c2320eb4714")
	Aug 09 18:17:31 ingress-addon-legacy-050000 kubelet[2617]: I0809 18:17:31.968651    2617 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bbd9da9-32ba-4a76-9f6d-2c2320eb4714-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "4bbd9da9-32ba-4a76-9f6d-2c2320eb4714" (UID: "4bbd9da9-32ba-4a76-9f6d-2c2320eb4714"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 09 18:17:31 ingress-addon-legacy-050000 kubelet[2617]: I0809 18:17:31.969102    2617 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bbd9da9-32ba-4a76-9f6d-2c2320eb4714-ingress-nginx-token-mz9db" (OuterVolumeSpecName: "ingress-nginx-token-mz9db") pod "4bbd9da9-32ba-4a76-9f6d-2c2320eb4714" (UID: "4bbd9da9-32ba-4a76-9f6d-2c2320eb4714"). InnerVolumeSpecName "ingress-nginx-token-mz9db". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 09 18:17:32 ingress-addon-legacy-050000 kubelet[2617]: I0809 18:17:32.059592    2617 reconciler.go:319] Volume detached for volume "ingress-nginx-token-mz9db" (UniqueName: "kubernetes.io/secret/4bbd9da9-32ba-4a76-9f6d-2c2320eb4714-ingress-nginx-token-mz9db") on node "ingress-addon-legacy-050000" DevicePath ""
	Aug 09 18:17:32 ingress-addon-legacy-050000 kubelet[2617]: I0809 18:17:32.059673    2617 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/4bbd9da9-32ba-4a76-9f6d-2c2320eb4714-webhook-cert") on node "ingress-addon-legacy-050000" DevicePath ""
	Aug 09 18:17:32 ingress-addon-legacy-050000 kubelet[2617]: W0809 18:17:32.221683    2617 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/4bbd9da9-32ba-4a76-9f6d-2c2320eb4714/volumes" does not exist
	
	* 
	* ==> storage-provisioner [00378453b4fe] <==
	* I0809 18:16:22.908712       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0809 18:16:22.912866       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0809 18:16:22.912920       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0809 18:16:22.915763       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0809 18:16:22.915988       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8ca6880d-266f-42c7-8a97-7e44070cf399", APIVersion:"v1", ResourceVersion:"372", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-050000_bece5030-5182-4537-92cc-fb3cc41e0292 became leader
	I0809 18:16:22.917027       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-050000_bece5030-5182-4537-92cc-fb3cc41e0292!
	I0809 18:16:23.018007       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-050000_bece5030-5182-4537-92cc-fb3cc41e0292!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-050000 -n ingress-addon-legacy-050000
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-050000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (55.04s)

TestMountStart/serial/StartWithMountFirst (10.25s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-603000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-603000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.182144542s)

-- stdout --
	* [mount-start-1-603000] minikube v1.31.1 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17011
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-603000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-603000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-603000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-603000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-603000 -n mount-start-1-603000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-603000 -n mount-start-1-603000: exit status 7 (67.625625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-603000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.25s)

TestMultiNode/serial/FreshStart2Nodes (10.52s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-305000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:85: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-305000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (10.445659125s)

-- stdout --
	* [multinode-305000] minikube v1.31.1 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17011
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-305000 in cluster multinode-305000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-305000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0809 11:19:47.631327    2593 out.go:296] Setting OutFile to fd 1 ...
	I0809 11:19:47.631431    2593 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:19:47.631435    2593 out.go:309] Setting ErrFile to fd 2...
	I0809 11:19:47.631437    2593 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:19:47.631539    2593 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17011-995/.minikube/bin
	I0809 11:19:47.632587    2593 out.go:303] Setting JSON to false
	I0809 11:19:47.647570    2593 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1161,"bootTime":1691604026,"procs":401,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0809 11:19:47.647644    2593 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0809 11:19:47.652552    2593 out.go:177] * [multinode-305000] minikube v1.31.1 on Darwin 13.5 (arm64)
	I0809 11:19:47.659565    2593 out.go:177]   - MINIKUBE_LOCATION=17011
	I0809 11:19:47.659610    2593 notify.go:220] Checking for updates...
	I0809 11:19:47.666503    2593 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	I0809 11:19:47.669629    2593 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0809 11:19:47.673509    2593 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0809 11:19:47.676557    2593 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	I0809 11:19:47.679524    2593 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0809 11:19:47.682649    2593 driver.go:373] Setting default libvirt URI to qemu:///system
	I0809 11:19:47.686470    2593 out.go:177] * Using the qemu2 driver based on user configuration
	I0809 11:19:47.693526    2593 start.go:298] selected driver: qemu2
	I0809 11:19:47.693531    2593 start.go:901] validating driver "qemu2" against <nil>
	I0809 11:19:47.693538    2593 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0809 11:19:47.695481    2593 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0809 11:19:47.699475    2593 out.go:177] * Automatically selected the socket_vmnet network
	I0809 11:19:47.702957    2593 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0809 11:19:47.702979    2593 cni.go:84] Creating CNI manager for ""
	I0809 11:19:47.702982    2593 cni.go:136] 0 nodes found, recommending kindnet
	I0809 11:19:47.702989    2593 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0809 11:19:47.702995    2593 start_flags.go:319] config:
	{Name:multinode-305000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:multinode-305000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:doc
ker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAge
ntPID:0}
	I0809 11:19:47.707304    2593 iso.go:125] acquiring lock: {Name:mkcf9da2bbc06f7ffafe691590a499fa3fc28d1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 11:19:47.714576    2593 out.go:177] * Starting control plane node multinode-305000 in cluster multinode-305000
	I0809 11:19:47.718324    2593 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0809 11:19:47.718351    2593 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4
	I0809 11:19:47.718361    2593 cache.go:57] Caching tarball of preloaded images
	I0809 11:19:47.718405    2593 preload.go:174] Found /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0809 11:19:47.718410    2593 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on docker
	I0809 11:19:47.718582    2593 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/multinode-305000/config.json ...
	I0809 11:19:47.718595    2593 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/multinode-305000/config.json: {Name:mka4e0544cf86b11995e645fcc19294cacb0e97f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 11:19:47.718793    2593 start.go:365] acquiring machines lock for multinode-305000: {Name:mk8e7245a2916cb0ab3d957fdcdeb632388214d9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0809 11:19:47.718821    2593 start.go:369] acquired machines lock for "multinode-305000" in 23.416µs
	I0809 11:19:47.718831    2593 start.go:93] Provisioning new machine with config: &{Name:multinode-305000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.27.4 ClusterName:multinode-305000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSiz
e:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0809 11:19:47.718862    2593 start.go:125] createHost starting for "" (driver="qemu2")
	I0809 11:19:47.727513    2593 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0809 11:19:47.742660    2593 start.go:159] libmachine.API.Create for "multinode-305000" (driver="qemu2")
	I0809 11:19:47.742679    2593 client.go:168] LocalClient.Create starting
	I0809 11:19:47.742731    2593 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem
	I0809 11:19:47.742762    2593 main.go:141] libmachine: Decoding PEM data...
	I0809 11:19:47.742773    2593 main.go:141] libmachine: Parsing certificate...
	I0809 11:19:47.742812    2593 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/cert.pem
	I0809 11:19:47.742831    2593 main.go:141] libmachine: Decoding PEM data...
	I0809 11:19:47.742844    2593 main.go:141] libmachine: Parsing certificate...
	I0809 11:19:47.743187    2593 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17011-995/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17011-995/.minikube/cache/iso/arm64/minikube-v1.31.0-1690838458-16971-arm64.iso...
	I0809 11:19:48.087026    2593 main.go:141] libmachine: Creating SSH key...
	I0809 11:19:48.307282    2593 main.go:141] libmachine: Creating Disk image...
	I0809 11:19:48.307291    2593 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0809 11:19:48.307495    2593 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17011-995/.minikube/machines/multinode-305000/disk.qcow2.raw /Users/jenkins/minikube-integration/17011-995/.minikube/machines/multinode-305000/disk.qcow2
	I0809 11:19:48.316659    2593 main.go:141] libmachine: STDOUT: 
	I0809 11:19:48.316674    2593 main.go:141] libmachine: STDERR: 
	I0809 11:19:48.316742    2593 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/multinode-305000/disk.qcow2 +20000M
	I0809 11:19:48.324039    2593 main.go:141] libmachine: STDOUT: Image resized.
	
	I0809 11:19:48.324051    2593 main.go:141] libmachine: STDERR: 
	I0809 11:19:48.324067    2593 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17011-995/.minikube/machines/multinode-305000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17011-995/.minikube/machines/multinode-305000/disk.qcow2
	I0809 11:19:48.324075    2593 main.go:141] libmachine: Starting QEMU VM...
	I0809 11:19:48.324110    2593 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17011-995/.minikube/machines/multinode-305000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/multinode-305000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17011-995/.minikube/machines/multinode-305000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:46:f8:5c:7f:32 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/multinode-305000/disk.qcow2
	I0809 11:19:48.325639    2593 main.go:141] libmachine: STDOUT: 
	I0809 11:19:48.325652    2593 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0809 11:19:48.325670    2593 client.go:171] LocalClient.Create took 583.006208ms
	I0809 11:19:50.327781    2593 start.go:128] duration metric: createHost completed in 2.608986208s
	I0809 11:19:50.327884    2593 start.go:83] releasing machines lock for "multinode-305000", held for 2.609107958s
	W0809 11:19:50.327946    2593 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:19:50.338217    2593 out.go:177] * Deleting "multinode-305000" in qemu2 ...
	W0809 11:19:50.357539    2593 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:19:50.357570    2593 start.go:687] Will try again in 5 seconds ...
	I0809 11:19:55.359659    2593 start.go:365] acquiring machines lock for multinode-305000: {Name:mk8e7245a2916cb0ab3d957fdcdeb632388214d9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0809 11:19:55.360136    2593 start.go:369] acquired machines lock for "multinode-305000" in 359.5µs
	I0809 11:19:55.360265    2593 start.go:93] Provisioning new machine with config: &{Name:multinode-305000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.27.4 ClusterName:multinode-305000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSiz
e:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0809 11:19:55.360556    2593 start.go:125] createHost starting for "" (driver="qemu2")
	I0809 11:19:55.370158    2593 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0809 11:19:55.419246    2593 start.go:159] libmachine.API.Create for "multinode-305000" (driver="qemu2")
	I0809 11:19:55.419298    2593 client.go:168] LocalClient.Create starting
	I0809 11:19:55.419425    2593 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem
	I0809 11:19:55.419496    2593 main.go:141] libmachine: Decoding PEM data...
	I0809 11:19:55.419514    2593 main.go:141] libmachine: Parsing certificate...
	I0809 11:19:55.419594    2593 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/cert.pem
	I0809 11:19:55.419633    2593 main.go:141] libmachine: Decoding PEM data...
	I0809 11:19:55.419651    2593 main.go:141] libmachine: Parsing certificate...
	I0809 11:19:55.420253    2593 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17011-995/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17011-995/.minikube/cache/iso/arm64/minikube-v1.31.0-1690838458-16971-arm64.iso...
	I0809 11:19:55.806019    2593 main.go:141] libmachine: Creating SSH key...
	I0809 11:19:55.992494    2593 main.go:141] libmachine: Creating Disk image...
	I0809 11:19:55.992504    2593 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0809 11:19:55.992652    2593 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17011-995/.minikube/machines/multinode-305000/disk.qcow2.raw /Users/jenkins/minikube-integration/17011-995/.minikube/machines/multinode-305000/disk.qcow2
	I0809 11:19:56.001285    2593 main.go:141] libmachine: STDOUT: 
	I0809 11:19:56.001299    2593 main.go:141] libmachine: STDERR: 
	I0809 11:19:56.001347    2593 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/multinode-305000/disk.qcow2 +20000M
	I0809 11:19:56.008599    2593 main.go:141] libmachine: STDOUT: Image resized.
	
	I0809 11:19:56.008609    2593 main.go:141] libmachine: STDERR: 
	I0809 11:19:56.008621    2593 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17011-995/.minikube/machines/multinode-305000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17011-995/.minikube/machines/multinode-305000/disk.qcow2
	I0809 11:19:56.008626    2593 main.go:141] libmachine: Starting QEMU VM...
	I0809 11:19:56.008673    2593 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17011-995/.minikube/machines/multinode-305000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/multinode-305000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17011-995/.minikube/machines/multinode-305000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:78:36:ab:5f:8d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/multinode-305000/disk.qcow2
	I0809 11:19:56.010164    2593 main.go:141] libmachine: STDOUT: 
	I0809 11:19:56.010175    2593 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0809 11:19:56.010187    2593 client.go:171] LocalClient.Create took 590.902625ms
	I0809 11:19:58.012279    2593 start.go:128] duration metric: createHost completed in 2.651779959s
	I0809 11:19:58.012369    2593 start.go:83] releasing machines lock for "multinode-305000", held for 2.652298666s
	W0809 11:19:58.012738    2593 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-305000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-305000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:19:58.022511    2593 out.go:177] 
	W0809 11:19:58.026614    2593 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0809 11:19:58.026660    2593 out.go:239] * 
	* 
	W0809 11:19:58.028955    2593 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0809 11:19:58.036503    2593 out.go:177] 

** /stderr **
multinode_test.go:87: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-305000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-305000 -n multinode-305000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-305000 -n multinode-305000: exit status 7 (70.486625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-305000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (10.52s)

TestMultiNode/serial/DeployApp2Nodes (105.85s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-305000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:481: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-305000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (126.338709ms)

** stderr ** 
	error: cluster "multinode-305000" does not exist

** /stderr **
multinode_test.go:483: failed to create busybox deployment to multinode cluster
multinode_test.go:486: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-305000 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-305000 -- rollout status deployment/busybox: exit status 1 (53.656958ms)

** stderr ** 
	error: no server found for cluster "multinode-305000"

** /stderr **
multinode_test.go:488: failed to deploy busybox to multinode cluster
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-305000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-305000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (54.835917ms)

** stderr ** 
	error: no server found for cluster "multinode-305000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-305000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-305000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.673791ms)

** stderr ** 
	error: no server found for cluster "multinode-305000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-305000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-305000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.625666ms)

** stderr ** 
	error: no server found for cluster "multinode-305000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-305000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-305000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.728792ms)

** stderr ** 
	error: no server found for cluster "multinode-305000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
E0809 11:20:04.146970    1410 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/functional-901000/client.crt: no such file or directory
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-305000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-305000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (99.938792ms)

** stderr ** 
	error: no server found for cluster "multinode-305000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-305000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-305000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.634708ms)

** stderr ** 
	error: no server found for cluster "multinode-305000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-305000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-305000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.313625ms)

** stderr ** 
	error: no server found for cluster "multinode-305000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-305000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-305000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.922209ms)

** stderr ** 
	error: no server found for cluster "multinode-305000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-305000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-305000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.6735ms)

** stderr ** 
	error: no server found for cluster "multinode-305000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-305000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-305000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.44425ms)

** stderr ** 
	error: no server found for cluster "multinode-305000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
E0809 11:21:26.066779    1410 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/functional-901000/client.crt: no such file or directory
E0809 11:21:39.850025    1410 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/ingress-addon-legacy-050000/client.crt: no such file or directory
E0809 11:21:39.856398    1410 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/ingress-addon-legacy-050000/client.crt: no such file or directory
E0809 11:21:39.868467    1410 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/ingress-addon-legacy-050000/client.crt: no such file or directory
E0809 11:21:39.890531    1410 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/ingress-addon-legacy-050000/client.crt: no such file or directory
E0809 11:21:39.932611    1410 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/ingress-addon-legacy-050000/client.crt: no such file or directory
E0809 11:21:40.014695    1410 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/ingress-addon-legacy-050000/client.crt: no such file or directory
E0809 11:21:40.176838    1410 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/ingress-addon-legacy-050000/client.crt: no such file or directory
E0809 11:21:40.498953    1410 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/ingress-addon-legacy-050000/client.crt: no such file or directory
E0809 11:21:41.141210    1410 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/ingress-addon-legacy-050000/client.crt: no such file or directory
E0809 11:21:42.423478    1410 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/ingress-addon-legacy-050000/client.crt: no such file or directory
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-305000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-305000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.555333ms)

** stderr ** 
	error: no server found for cluster "multinode-305000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:512: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:516: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-305000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:516: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-305000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (53.810417ms)

** stderr ** 
	error: no server found for cluster "multinode-305000"

** /stderr **
multinode_test.go:518: failed get Pod names
multinode_test.go:524: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-305000 -- exec  -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-305000 -- exec  -- nslookup kubernetes.io: exit status 1 (53.399209ms)

** stderr ** 
	error: no server found for cluster "multinode-305000"

** /stderr **
multinode_test.go:526: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-305000 -- exec  -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-305000 -- exec  -- nslookup kubernetes.default: exit status 1 (54.13475ms)

** stderr ** 
	error: no server found for cluster "multinode-305000"

** /stderr **
multinode_test.go:536: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:542: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-305000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-305000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (54.155417ms)

** stderr ** 
	error: no server found for cluster "multinode-305000"

** /stderr **
multinode_test.go:544: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-305000 -n multinode-305000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-305000 -n multinode-305000: exit status 7 (28.515541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-305000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (105.85s)

TestMultiNode/serial/PingHostFrom2Pods (0.08s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-305000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-305000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (53.466125ms)

** stderr ** 
	error: no server found for cluster "multinode-305000"

** /stderr **
multinode_test.go:554: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-305000 -n multinode-305000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-305000 -n multinode-305000: exit status 7 (28.362208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-305000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.08s)

TestMultiNode/serial/AddNode (0.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-305000 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-305000 -v 3 --alsologtostderr: exit status 89 (39.617208ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-305000"

-- /stdout --
** stderr ** 
	I0809 11:21:44.078398    2696 out.go:296] Setting OutFile to fd 1 ...
	I0809 11:21:44.078598    2696 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:21:44.078601    2696 out.go:309] Setting ErrFile to fd 2...
	I0809 11:21:44.078603    2696 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:21:44.078718    2696 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17011-995/.minikube/bin
	I0809 11:21:44.078931    2696 mustload.go:65] Loading cluster: multinode-305000
	I0809 11:21:44.079100    2696 config.go:182] Loaded profile config "multinode-305000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0809 11:21:44.083101    2696 out.go:177] * The control plane node must be running for this command
	I0809 11:21:44.087166    2696 out.go:177]   To start a cluster, run: "minikube start -p multinode-305000"

** /stderr **
multinode_test.go:112: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-305000 -v 3 --alsologtostderr" : exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-305000 -n multinode-305000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-305000 -n multinode-305000: exit status 7 (28.434667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-305000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

TestMultiNode/serial/ProfileList (0.16s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:155: expected profile "multinode-305000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-305000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-305000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"VMDriver\":\"\",\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":0,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.27.4\",\"ClusterName\":\"multinode-305000\",\"Namespace\":\"default\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\",\"NodeIP\":\"\",\"NodePort\":8443,\"NodeName\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.27.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-305000 -n multinode-305000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-305000 -n multinode-305000: exit status 7 (31.475083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-305000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.16s)

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-305000 status --output json --alsologtostderr
multinode_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-305000 status --output json --alsologtostderr: exit status 7 (28.666417ms)

-- stdout --
	{"Name":"multinode-305000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0809 11:21:44.311731    2706 out.go:296] Setting OutFile to fd 1 ...
	I0809 11:21:44.311856    2706 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:21:44.311861    2706 out.go:309] Setting ErrFile to fd 2...
	I0809 11:21:44.311863    2706 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:21:44.311987    2706 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17011-995/.minikube/bin
	I0809 11:21:44.312089    2706 out.go:303] Setting JSON to true
	I0809 11:21:44.312101    2706 mustload.go:65] Loading cluster: multinode-305000
	I0809 11:21:44.312164    2706 notify.go:220] Checking for updates...
	I0809 11:21:44.312264    2706 config.go:182] Loaded profile config "multinode-305000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0809 11:21:44.312268    2706 status.go:255] checking status of multinode-305000 ...
	I0809 11:21:44.312451    2706 status.go:330] multinode-305000 host status = "Stopped" (err=<nil>)
	I0809 11:21:44.312455    2706 status.go:343] host is not running, skipping remaining checks
	I0809 11:21:44.312457    2706 status.go:257] multinode-305000 status: &{Name:multinode-305000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:180: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-305000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-305000 -n multinode-305000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-305000 -n multinode-305000: exit status 7 (28.053291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-305000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)

TestMultiNode/serial/StopNode (0.13s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-305000 node stop m03
multinode_test.go:210: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-305000 node stop m03: exit status 85 (46.428083ms)

-- stdout --

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:212: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-305000 node stop m03": exit status 85
multinode_test.go:216: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-305000 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-305000 status: exit status 7 (28.035333ms)

-- stdout --
	multinode-305000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped

-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-305000 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-305000 status --alsologtostderr: exit status 7 (28.241833ms)

-- stdout --
	multinode-305000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped

-- /stdout --
** stderr ** 
	I0809 11:21:44.443334    2714 out.go:296] Setting OutFile to fd 1 ...
	I0809 11:21:44.443484    2714 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:21:44.443486    2714 out.go:309] Setting ErrFile to fd 2...
	I0809 11:21:44.443489    2714 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:21:44.443596    2714 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17011-995/.minikube/bin
	I0809 11:21:44.443701    2714 out.go:303] Setting JSON to false
	I0809 11:21:44.443712    2714 mustload.go:65] Loading cluster: multinode-305000
	I0809 11:21:44.443770    2714 notify.go:220] Checking for updates...
	I0809 11:21:44.443878    2714 config.go:182] Loaded profile config "multinode-305000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0809 11:21:44.443882    2714 status.go:255] checking status of multinode-305000 ...
	I0809 11:21:44.444103    2714 status.go:330] multinode-305000 host status = "Stopped" (err=<nil>)
	I0809 11:21:44.444106    2714 status.go:343] host is not running, skipping remaining checks
	I0809 11:21:44.444108    2714 status.go:257] multinode-305000 status: &{Name:multinode-305000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:229: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-305000 status --alsologtostderr": multinode-305000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-305000 -n multinode-305000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-305000 -n multinode-305000: exit status 7 (28.191375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-305000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.13s)

TestMultiNode/serial/StartAfterStop (0.1s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-305000 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-305000 node start m03 --alsologtostderr: exit status 85 (44.004958ms)

-- stdout --

-- /stdout --
** stderr ** 
	I0809 11:21:44.500250    2718 out.go:296] Setting OutFile to fd 1 ...
	I0809 11:21:44.500447    2718 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:21:44.500449    2718 out.go:309] Setting ErrFile to fd 2...
	I0809 11:21:44.500452    2718 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:21:44.500560    2718 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17011-995/.minikube/bin
	I0809 11:21:44.500776    2718 mustload.go:65] Loading cluster: multinode-305000
	I0809 11:21:44.500942    2718 config.go:182] Loaded profile config "multinode-305000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0809 11:21:44.504687    2718 out.go:177] 
	W0809 11:21:44.507675    2718 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0809 11:21:44.507680    2718 out.go:239] * 
	* 
	W0809 11:21:44.509251    2718 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0809 11:21:44.512497    2718 out.go:177] 

** /stderr **
multinode_test.go:256: I0809 11:21:44.500250    2718 out.go:296] Setting OutFile to fd 1 ...
I0809 11:21:44.500447    2718 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0809 11:21:44.500449    2718 out.go:309] Setting ErrFile to fd 2...
I0809 11:21:44.500452    2718 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0809 11:21:44.500560    2718 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17011-995/.minikube/bin
I0809 11:21:44.500776    2718 mustload.go:65] Loading cluster: multinode-305000
I0809 11:21:44.500942    2718 config.go:182] Loaded profile config "multinode-305000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
I0809 11:21:44.504687    2718 out.go:177] 
W0809 11:21:44.507675    2718 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0809 11:21:44.507680    2718 out.go:239] * 
* 
W0809 11:21:44.509251    2718 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0809 11:21:44.512497    2718 out.go:177] 
multinode_test.go:257: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-305000 node start m03 --alsologtostderr": exit status 85
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-305000 status
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-305000 status: exit status 7 (28.848ms)
-- stdout --
	multinode-305000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
multinode_test.go:263: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-305000 status" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-305000 -n multinode-305000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-305000 -n multinode-305000: exit status 7 (28.053375ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-305000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (0.10s)
TestMultiNode/serial/RestartKeepsNodes (5.37s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-305000
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-305000
multinode_test.go:295: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-305000 --wait=true -v=8 --alsologtostderr
E0809 11:21:44.986114    1410 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/ingress-addon-legacy-050000/client.crt: no such file or directory
multinode_test.go:295: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-305000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.1783145s)
-- stdout --
	* [multinode-305000] minikube v1.31.1 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17011
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node multinode-305000 in cluster multinode-305000
	* Restarting existing qemu2 VM for "multinode-305000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-305000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0809 11:21:44.688281    2728 out.go:296] Setting OutFile to fd 1 ...
	I0809 11:21:44.688402    2728 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:21:44.688404    2728 out.go:309] Setting ErrFile to fd 2...
	I0809 11:21:44.688407    2728 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:21:44.688527    2728 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17011-995/.minikube/bin
	I0809 11:21:44.689454    2728 out.go:303] Setting JSON to false
	I0809 11:21:44.704626    2728 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1278,"bootTime":1691604026,"procs":398,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0809 11:21:44.704695    2728 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0809 11:21:44.709651    2728 out.go:177] * [multinode-305000] minikube v1.31.1 on Darwin 13.5 (arm64)
	I0809 11:21:44.716659    2728 out.go:177]   - MINIKUBE_LOCATION=17011
	I0809 11:21:44.716728    2728 notify.go:220] Checking for updates...
	I0809 11:21:44.720421    2728 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	I0809 11:21:44.723633    2728 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0809 11:21:44.726653    2728 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0809 11:21:44.729610    2728 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	I0809 11:21:44.732631    2728 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0809 11:21:44.735844    2728 config.go:182] Loaded profile config "multinode-305000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0809 11:21:44.735884    2728 driver.go:373] Setting default libvirt URI to qemu:///system
	I0809 11:21:44.740663    2728 out.go:177] * Using the qemu2 driver based on existing profile
	I0809 11:21:44.747589    2728 start.go:298] selected driver: qemu2
	I0809 11:21:44.747596    2728 start.go:901] validating driver "qemu2" against &{Name:multinode-305000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.27.4 ClusterName:multinode-305000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0809 11:21:44.747664    2728 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0809 11:21:44.749529    2728 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0809 11:21:44.749584    2728 cni.go:84] Creating CNI manager for ""
	I0809 11:21:44.749589    2728 cni.go:136] 1 nodes found, recommending kindnet
	I0809 11:21:44.749593    2728 start_flags.go:319] config:
	{Name:multinode-305000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:multinode-305000 Namespace:default API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0809 11:21:44.753395    2728 iso.go:125] acquiring lock: {Name:mkcf9da2bbc06f7ffafe691590a499fa3fc28d1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 11:21:44.761576    2728 out.go:177] * Starting control plane node multinode-305000 in cluster multinode-305000
	I0809 11:21:44.765612    2728 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0809 11:21:44.765630    2728 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4
	I0809 11:21:44.765639    2728 cache.go:57] Caching tarball of preloaded images
	I0809 11:21:44.765698    2728 preload.go:174] Found /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0809 11:21:44.765703    2728 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on docker
	I0809 11:21:44.765762    2728 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/multinode-305000/config.json ...
	I0809 11:21:44.766102    2728 start.go:365] acquiring machines lock for multinode-305000: {Name:mk8e7245a2916cb0ab3d957fdcdeb632388214d9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0809 11:21:44.766130    2728 start.go:369] acquired machines lock for "multinode-305000" in 22.917µs
	I0809 11:21:44.766139    2728 start.go:96] Skipping create...Using existing machine configuration
	I0809 11:21:44.766144    2728 fix.go:54] fixHost starting: 
	I0809 11:21:44.766257    2728 fix.go:102] recreateIfNeeded on multinode-305000: state=Stopped err=<nil>
	W0809 11:21:44.766265    2728 fix.go:128] unexpected machine state, will restart: <nil>
	I0809 11:21:44.774598    2728 out.go:177] * Restarting existing qemu2 VM for "multinode-305000" ...
	I0809 11:21:44.778610    2728 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17011-995/.minikube/machines/multinode-305000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/multinode-305000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17011-995/.minikube/machines/multinode-305000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:78:36:ab:5f:8d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/multinode-305000/disk.qcow2
	I0809 11:21:44.780453    2728 main.go:141] libmachine: STDOUT: 
	I0809 11:21:44.780472    2728 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0809 11:21:44.780510    2728 fix.go:56] fixHost completed within 14.36675ms
	I0809 11:21:44.780515    2728 start.go:83] releasing machines lock for "multinode-305000", held for 14.381708ms
	W0809 11:21:44.780523    2728 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0809 11:21:44.780562    2728 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:21:44.780567    2728 start.go:687] Will try again in 5 seconds ...
	I0809 11:21:49.782570    2728 start.go:365] acquiring machines lock for multinode-305000: {Name:mk8e7245a2916cb0ab3d957fdcdeb632388214d9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0809 11:21:49.782900    2728 start.go:369] acquired machines lock for "multinode-305000" in 257.208µs
	I0809 11:21:49.783011    2728 start.go:96] Skipping create...Using existing machine configuration
	I0809 11:21:49.783031    2728 fix.go:54] fixHost starting: 
	I0809 11:21:49.783737    2728 fix.go:102] recreateIfNeeded on multinode-305000: state=Stopped err=<nil>
	W0809 11:21:49.783761    2728 fix.go:128] unexpected machine state, will restart: <nil>
	I0809 11:21:49.788152    2728 out.go:177] * Restarting existing qemu2 VM for "multinode-305000" ...
	I0809 11:21:49.796300    2728 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17011-995/.minikube/machines/multinode-305000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/multinode-305000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17011-995/.minikube/machines/multinode-305000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:78:36:ab:5f:8d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/multinode-305000/disk.qcow2
	I0809 11:21:49.804333    2728 main.go:141] libmachine: STDOUT: 
	I0809 11:21:49.804379    2728 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0809 11:21:49.804442    2728 fix.go:56] fixHost completed within 21.416542ms
	I0809 11:21:49.804663    2728 start.go:83] releasing machines lock for "multinode-305000", held for 21.744292ms
	W0809 11:21:49.804835    2728 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-305000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-305000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:21:49.813117    2728 out.go:177] 
	W0809 11:21:49.817223    2728 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0809 11:21:49.817269    2728 out.go:239] * 
	* 
	W0809 11:21:49.820050    2728 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0809 11:21:49.827170    2728 out.go:177] 
** /stderr **
multinode_test.go:297: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-305000" : exit status 80
multinode_test.go:300: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-305000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-305000 -n multinode-305000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-305000 -n multinode-305000: exit status 7 (32.793917ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-305000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (5.37s)
TestMultiNode/serial/DeleteNode (0.09s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-305000 node delete m03
multinode_test.go:394: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-305000 node delete m03: exit status 89 (37.207875ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-305000"
-- /stdout --
multinode_test.go:396: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-305000 node delete m03": exit status 89
multinode_test.go:400: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-305000 status --alsologtostderr
multinode_test.go:400: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-305000 status --alsologtostderr: exit status 7 (27.858041ms)
-- stdout --
	multinode-305000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0809 11:21:50.004941    2742 out.go:296] Setting OutFile to fd 1 ...
	I0809 11:21:50.005063    2742 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:21:50.005065    2742 out.go:309] Setting ErrFile to fd 2...
	I0809 11:21:50.005068    2742 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:21:50.005184    2742 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17011-995/.minikube/bin
	I0809 11:21:50.005294    2742 out.go:303] Setting JSON to false
	I0809 11:21:50.005305    2742 mustload.go:65] Loading cluster: multinode-305000
	I0809 11:21:50.005381    2742 notify.go:220] Checking for updates...
	I0809 11:21:50.005477    2742 config.go:182] Loaded profile config "multinode-305000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0809 11:21:50.005481    2742 status.go:255] checking status of multinode-305000 ...
	I0809 11:21:50.005664    2742 status.go:330] multinode-305000 host status = "Stopped" (err=<nil>)
	I0809 11:21:50.005668    2742 status.go:343] host is not running, skipping remaining checks
	I0809 11:21:50.005670    2742 status.go:257] multinode-305000 status: &{Name:multinode-305000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:402: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-305000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-305000 -n multinode-305000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-305000 -n multinode-305000: exit status 7 (28.2215ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-305000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.09s)
TestMultiNode/serial/StopMultiNode (0.14s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-305000 stop
multinode_test.go:320: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-305000 status
E0809 11:21:50.107043    1410 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/ingress-addon-legacy-050000/client.crt: no such file or directory
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-305000 status: exit status 7 (28.143125ms)
-- stdout --
	multinode-305000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-305000 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-305000 status --alsologtostderr: exit status 7 (28.4795ms)
-- stdout --
	multinode-305000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0809 11:21:50.149662    2750 out.go:296] Setting OutFile to fd 1 ...
	I0809 11:21:50.149784    2750 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:21:50.149787    2750 out.go:309] Setting ErrFile to fd 2...
	I0809 11:21:50.149789    2750 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:21:50.149912    2750 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17011-995/.minikube/bin
	I0809 11:21:50.150025    2750 out.go:303] Setting JSON to false
	I0809 11:21:50.150036    2750 mustload.go:65] Loading cluster: multinode-305000
	I0809 11:21:50.150084    2750 notify.go:220] Checking for updates...
	I0809 11:21:50.150212    2750 config.go:182] Loaded profile config "multinode-305000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0809 11:21:50.150216    2750 status.go:255] checking status of multinode-305000 ...
	I0809 11:21:50.150402    2750 status.go:330] multinode-305000 host status = "Stopped" (err=<nil>)
	I0809 11:21:50.150406    2750 status.go:343] host is not running, skipping remaining checks
	I0809 11:21:50.150408    2750 status.go:257] multinode-305000 status: &{Name:multinode-305000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:333: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-305000 status --alsologtostderr": multinode-305000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
multinode_test.go:337: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-305000 status --alsologtostderr": multinode-305000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-305000 -n multinode-305000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-305000 -n multinode-305000: exit status 7 (28.123875ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-305000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (0.14s)
TestMultiNode/serial/RestartMultiNode (5.25s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-305000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:354: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-305000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.178462625s)
-- stdout --
	* [multinode-305000] minikube v1.31.1 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17011
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node multinode-305000 in cluster multinode-305000
	* Restarting existing qemu2 VM for "multinode-305000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-305000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0809 11:21:50.205479    2754 out.go:296] Setting OutFile to fd 1 ...
	I0809 11:21:50.205612    2754 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:21:50.205614    2754 out.go:309] Setting ErrFile to fd 2...
	I0809 11:21:50.205617    2754 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:21:50.205736    2754 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17011-995/.minikube/bin
	I0809 11:21:50.206675    2754 out.go:303] Setting JSON to false
	I0809 11:21:50.221875    2754 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1284,"bootTime":1691604026,"procs":398,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0809 11:21:50.221954    2754 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0809 11:21:50.225494    2754 out.go:177] * [multinode-305000] minikube v1.31.1 on Darwin 13.5 (arm64)
	I0809 11:21:50.232627    2754 out.go:177]   - MINIKUBE_LOCATION=17011
	I0809 11:21:50.236496    2754 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	I0809 11:21:50.232687    2754 notify.go:220] Checking for updates...
	I0809 11:21:50.243617    2754 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0809 11:21:50.246647    2754 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0809 11:21:50.249607    2754 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	I0809 11:21:50.252633    2754 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0809 11:21:50.255854    2754 config.go:182] Loaded profile config "multinode-305000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0809 11:21:50.256100    2754 driver.go:373] Setting default libvirt URI to qemu:///system
	I0809 11:21:50.260580    2754 out.go:177] * Using the qemu2 driver based on existing profile
	I0809 11:21:50.266459    2754 start.go:298] selected driver: qemu2
	I0809 11:21:50.266464    2754 start.go:901] validating driver "qemu2" against &{Name:multinode-305000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:multinode-305000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0809 11:21:50.266512    2754 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0809 11:21:50.268453    2754 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0809 11:21:50.268482    2754 cni.go:84] Creating CNI manager for ""
	I0809 11:21:50.268486    2754 cni.go:136] 1 nodes found, recommending kindnet
	I0809 11:21:50.268491    2754 start_flags.go:319] config:
	{Name:multinode-305000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:multinode-305000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0809 11:21:50.272968    2754 iso.go:125] acquiring lock: {Name:mkcf9da2bbc06f7ffafe691590a499fa3fc28d1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 11:21:50.276653    2754 out.go:177] * Starting control plane node multinode-305000 in cluster multinode-305000
	I0809 11:21:50.284577    2754 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0809 11:21:50.284599    2754 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4
	I0809 11:21:50.284618    2754 cache.go:57] Caching tarball of preloaded images
	I0809 11:21:50.284668    2754 preload.go:174] Found /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0809 11:21:50.284674    2754 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on docker
	I0809 11:21:50.284729    2754 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/multinode-305000/config.json ...
	I0809 11:21:50.284979    2754 start.go:365] acquiring machines lock for multinode-305000: {Name:mk8e7245a2916cb0ab3d957fdcdeb632388214d9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0809 11:21:50.285004    2754 start.go:369] acquired machines lock for "multinode-305000" in 19.709µs
	I0809 11:21:50.285014    2754 start.go:96] Skipping create...Using existing machine configuration
	I0809 11:21:50.285018    2754 fix.go:54] fixHost starting: 
	I0809 11:21:50.285133    2754 fix.go:102] recreateIfNeeded on multinode-305000: state=Stopped err=<nil>
	W0809 11:21:50.285141    2754 fix.go:128] unexpected machine state, will restart: <nil>
	I0809 11:21:50.292534    2754 out.go:177] * Restarting existing qemu2 VM for "multinode-305000" ...
	I0809 11:21:50.296633    2754 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17011-995/.minikube/machines/multinode-305000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/multinode-305000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17011-995/.minikube/machines/multinode-305000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:78:36:ab:5f:8d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/multinode-305000/disk.qcow2
	I0809 11:21:50.298641    2754 main.go:141] libmachine: STDOUT: 
	I0809 11:21:50.298658    2754 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0809 11:21:50.298684    2754 fix.go:56] fixHost completed within 13.666375ms
	I0809 11:21:50.298689    2754 start.go:83] releasing machines lock for "multinode-305000", held for 13.681208ms
	W0809 11:21:50.298697    2754 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0809 11:21:50.298725    2754 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:21:50.298730    2754 start.go:687] Will try again in 5 seconds ...
	I0809 11:21:55.300685    2754 start.go:365] acquiring machines lock for multinode-305000: {Name:mk8e7245a2916cb0ab3d957fdcdeb632388214d9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0809 11:21:55.301086    2754 start.go:369] acquired machines lock for "multinode-305000" in 319.583µs
	I0809 11:21:55.301230    2754 start.go:96] Skipping create...Using existing machine configuration
	I0809 11:21:55.301263    2754 fix.go:54] fixHost starting: 
	I0809 11:21:55.301999    2754 fix.go:102] recreateIfNeeded on multinode-305000: state=Stopped err=<nil>
	W0809 11:21:55.302026    2754 fix.go:128] unexpected machine state, will restart: <nil>
	I0809 11:21:55.310424    2754 out.go:177] * Restarting existing qemu2 VM for "multinode-305000" ...
	I0809 11:21:55.314688    2754 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17011-995/.minikube/machines/multinode-305000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/multinode-305000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17011-995/.minikube/machines/multinode-305000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:78:36:ab:5f:8d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/multinode-305000/disk.qcow2
	I0809 11:21:55.323291    2754 main.go:141] libmachine: STDOUT: 
	I0809 11:21:55.323356    2754 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0809 11:21:55.323435    2754 fix.go:56] fixHost completed within 22.174917ms
	I0809 11:21:55.323451    2754 start.go:83] releasing machines lock for "multinode-305000", held for 22.345125ms
	W0809 11:21:55.323637    2754 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-305000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-305000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:21:55.331426    2754 out.go:177] 
	W0809 11:21:55.334524    2754 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0809 11:21:55.334548    2754 out.go:239] * 
	* 
	W0809 11:21:55.337259    2754 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0809 11:21:55.345424    2754 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:356: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-305000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-305000 -n multinode-305000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-305000 -n multinode-305000: exit status 7 (67.058875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-305000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)
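Every failure in this group shares one root error: `Failed to connect to "/var/run/socket_vmnet": Connection refused`, meaning nothing was listening on that UNIX socket when QEMU tried to attach its network, i.e. the socket_vmnet daemon was not running on the CI host. A minimal pre-flight sketch (the socket path is the default seen in these logs; the `check_vmnet_socket` helper name is illustrative, not part of minikube):

```shell
# Hypothetical pre-flight check for the socket_vmnet "Connection refused"
# failures: verify the daemon's UNIX socket actually exists before starting
# any qemu2-driver test.
check_vmnet_socket() {
  # [ -S path ] is true only if the path exists and is a socket file
  if [ -S "$1" ]; then
    echo "socket present: $1"
  else
    echo "no socket at $1"
  fi
}

check_vmnet_socket /var/run/socket_vmnet
```

If the socket is missing, restarting the socket_vmnet service on the host (e.g. via its launchd/Homebrew service, depending on how it was installed) is the usual fix before re-running the suite.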

TestMultiNode/serial/ValidateNameConflict (20.14s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-305000
multinode_test.go:452: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-305000-m01 --driver=qemu2 
E0809 11:22:00.349201    1410 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/ingress-addon-legacy-050000/client.crt: no such file or directory
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-305000-m01 --driver=qemu2 : exit status 80 (10.0766955s)

-- stdout --
	* [multinode-305000-m01] minikube v1.31.1 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17011
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-305000-m01 in cluster multinode-305000-m01
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-305000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-305000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-305000-m02 --driver=qemu2 
multinode_test.go:460: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-305000-m02 --driver=qemu2 : exit status 80 (9.779783666s)

-- stdout --
	* [multinode-305000-m02] minikube v1.31.1 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17011
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-305000-m02 in cluster multinode-305000-m02
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-305000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-305000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:462: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-305000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-305000
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-305000: exit status 89 (82.597458ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-305000"

-- /stdout --
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-305000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-305000 -n multinode-305000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-305000 -n multinode-305000: exit status 7 (31.223208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-305000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.14s)

TestPreload (9.9s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-851000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
E0809 11:22:20.829761    1410 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/ingress-addon-legacy-050000/client.crt: no such file or directory
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-851000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.739938375s)

-- stdout --
	* [test-preload-851000] minikube v1.31.1 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17011
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node test-preload-851000 in cluster test-preload-851000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-851000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0809 11:22:15.718385    2812 out.go:296] Setting OutFile to fd 1 ...
	I0809 11:22:15.718556    2812 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:22:15.718559    2812 out.go:309] Setting ErrFile to fd 2...
	I0809 11:22:15.718561    2812 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:22:15.718682    2812 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17011-995/.minikube/bin
	I0809 11:22:15.719903    2812 out.go:303] Setting JSON to false
	I0809 11:22:15.735517    2812 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1309,"bootTime":1691604026,"procs":398,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0809 11:22:15.735584    2812 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0809 11:22:15.740986    2812 out.go:177] * [test-preload-851000] minikube v1.31.1 on Darwin 13.5 (arm64)
	I0809 11:22:15.748968    2812 out.go:177]   - MINIKUBE_LOCATION=17011
	I0809 11:22:15.751797    2812 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	I0809 11:22:15.748987    2812 notify.go:220] Checking for updates...
	I0809 11:22:15.754919    2812 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0809 11:22:15.757950    2812 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0809 11:22:15.759313    2812 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	I0809 11:22:15.761898    2812 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0809 11:22:15.765250    2812 config.go:182] Loaded profile config "multinode-305000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0809 11:22:15.765290    2812 driver.go:373] Setting default libvirt URI to qemu:///system
	I0809 11:22:15.769776    2812 out.go:177] * Using the qemu2 driver based on user configuration
	I0809 11:22:15.776865    2812 start.go:298] selected driver: qemu2
	I0809 11:22:15.776870    2812 start.go:901] validating driver "qemu2" against <nil>
	I0809 11:22:15.776876    2812 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0809 11:22:15.778804    2812 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0809 11:22:15.781936    2812 out.go:177] * Automatically selected the socket_vmnet network
	I0809 11:22:15.785085    2812 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0809 11:22:15.785110    2812 cni.go:84] Creating CNI manager for ""
	I0809 11:22:15.785116    2812 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0809 11:22:15.785120    2812 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0809 11:22:15.785125    2812 start_flags.go:319] config:
	{Name:test-preload-851000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-851000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0809 11:22:15.789207    2812 iso.go:125] acquiring lock: {Name:mkcf9da2bbc06f7ffafe691590a499fa3fc28d1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 11:22:15.796809    2812 out.go:177] * Starting control plane node test-preload-851000 in cluster test-preload-851000
	I0809 11:22:15.800950    2812 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0809 11:22:15.801066    2812 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/test-preload-851000/config.json ...
	I0809 11:22:15.801052    2812 cache.go:107] acquiring lock: {Name:mkab3054a16289a4aefcfbb61ea6380445295ef9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 11:22:15.801070    2812 cache.go:107] acquiring lock: {Name:mk5571f83bb891010d178b79abcbd78cdbcf47ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 11:22:15.801074    2812 cache.go:107] acquiring lock: {Name:mk410148978c1fc16e077b95baba98a45aa860d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 11:22:15.801085    2812 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/test-preload-851000/config.json: {Name:mka4198d62095b49367a2dccbc509f933260bea8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 11:22:15.801103    2812 cache.go:107] acquiring lock: {Name:mkaf96f273a4dc4f9bacf8bff911abce95b86b24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 11:22:15.801234    2812 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0809 11:22:15.801250    2812 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0809 11:22:15.801255    2812 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0809 11:22:15.801225    2812 cache.go:107] acquiring lock: {Name:mkd9ebea9c639d2f9c482a4f59aac88893ae5ec0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 11:22:15.801334    2812 cache.go:107] acquiring lock: {Name:mk0ab0f199b019e99177aecbc969673e948e3f6d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 11:22:15.801374    2812 start.go:365] acquiring machines lock for test-preload-851000: {Name:mk8e7245a2916cb0ab3d957fdcdeb632388214d9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0809 11:22:15.801407    2812 start.go:369] acquired machines lock for "test-preload-851000" in 27.333µs
	I0809 11:22:15.801375    2812 cache.go:107] acquiring lock: {Name:mk81779fcd2f65567841c3680cd103a281ebc4f4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 11:22:15.801451    2812 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0809 11:22:15.801451    2812 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0809 11:22:15.801416    2812 start.go:93] Provisioning new machine with config: &{Name:test-preload-851000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernete
sConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-851000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mo
untMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0809 11:22:15.801438    2812 cache.go:107] acquiring lock: {Name:mke92048976a406f76ee913e07c4a7a594c2c5e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 11:22:15.801509    2812 start.go:125] createHost starting for "" (driver="qemu2")
	I0809 11:22:15.801540    2812 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0809 11:22:15.809917    2812 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0809 11:22:15.801562    2812 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0809 11:22:15.801588    2812 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0809 11:22:15.816847    2812 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0809 11:22:15.816907    2812 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0809 11:22:15.817545    2812 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0809 11:22:15.817637    2812 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0809 11:22:15.820950    2812 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0809 11:22:15.820986    2812 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0809 11:22:15.821057    2812 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0809 11:22:15.821071    2812 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0809 11:22:15.825210    2812 start.go:159] libmachine.API.Create for "test-preload-851000" (driver="qemu2")
	I0809 11:22:15.825229    2812 client.go:168] LocalClient.Create starting
	I0809 11:22:15.825296    2812 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem
	I0809 11:22:15.825320    2812 main.go:141] libmachine: Decoding PEM data...
	I0809 11:22:15.825329    2812 main.go:141] libmachine: Parsing certificate...
	I0809 11:22:15.825366    2812 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/cert.pem
	I0809 11:22:15.825384    2812 main.go:141] libmachine: Decoding PEM data...
	I0809 11:22:15.825394    2812 main.go:141] libmachine: Parsing certificate...
	I0809 11:22:15.825669    2812 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17011-995/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17011-995/.minikube/cache/iso/arm64/minikube-v1.31.0-1690838458-16971-arm64.iso...
	I0809 11:22:15.999210    2812 main.go:141] libmachine: Creating SSH key...
	I0809 11:22:16.039510    2812 main.go:141] libmachine: Creating Disk image...
	I0809 11:22:16.039561    2812 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0809 11:22:16.039781    2812 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17011-995/.minikube/machines/test-preload-851000/disk.qcow2.raw /Users/jenkins/minikube-integration/17011-995/.minikube/machines/test-preload-851000/disk.qcow2
	I0809 11:22:16.048397    2812 main.go:141] libmachine: STDOUT: 
	I0809 11:22:16.048413    2812 main.go:141] libmachine: STDERR: 
	I0809 11:22:16.048477    2812 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/test-preload-851000/disk.qcow2 +20000M
	I0809 11:22:16.056497    2812 main.go:141] libmachine: STDOUT: Image resized.
	
	I0809 11:22:16.056515    2812 main.go:141] libmachine: STDERR: 
	I0809 11:22:16.056543    2812 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17011-995/.minikube/machines/test-preload-851000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17011-995/.minikube/machines/test-preload-851000/disk.qcow2
	I0809 11:22:16.056551    2812 main.go:141] libmachine: Starting QEMU VM...
	I0809 11:22:16.056590    2812 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17011-995/.minikube/machines/test-preload-851000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/test-preload-851000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17011-995/.minikube/machines/test-preload-851000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:b8:84:72:e1:f2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/test-preload-851000/disk.qcow2
	I0809 11:22:16.058443    2812 main.go:141] libmachine: STDOUT: 
	I0809 11:22:16.058456    2812 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0809 11:22:16.058473    2812 client.go:171] LocalClient.Create took 233.24725ms
	I0809 11:22:16.573565    2812 cache.go:162] opening:  /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0809 11:22:16.599699    2812 cache.go:162] opening:  /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0809 11:22:16.960155    2812 cache.go:162] opening:  /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	W0809 11:22:17.109068    2812 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0809 11:22:17.109101    2812 cache.go:162] opening:  /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0809 11:22:17.127679    2812 cache.go:162] opening:  /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0809 11:22:17.305828    2812 cache.go:157] /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0809 11:22:17.305847    2812 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 1.504794875s
	I0809 11:22:17.305853    2812 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	I0809 11:22:17.348841    2812 cache.go:162] opening:  /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0809 11:22:17.402252    2812 cache.go:157] /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0809 11:22:17.402267    2812 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.601270708s
	I0809 11:22:17.402274    2812 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0809 11:22:17.571262    2812 cache.go:162] opening:  /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	W0809 11:22:17.811114    2812 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0809 11:22:17.811176    2812 cache.go:162] opening:  /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0809 11:22:18.058739    2812 start.go:128] duration metric: createHost completed in 2.257271625s
	I0809 11:22:18.058797    2812 start.go:83] releasing machines lock for "test-preload-851000", held for 2.257458333s
	W0809 11:22:18.058852    2812 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:22:18.068862    2812 out.go:177] * Deleting "test-preload-851000" in qemu2 ...
	W0809 11:22:18.088400    2812 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:22:18.088435    2812 start.go:687] Will try again in 5 seconds ...
	I0809 11:22:18.473432    2812 cache.go:157] /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0809 11:22:18.473481    2812 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 2.672511708s
	I0809 11:22:18.473542    2812 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0809 11:22:19.364902    2812 cache.go:157] /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0809 11:22:19.364948    2812 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 3.563753959s
	I0809 11:22:19.364973    2812 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0809 11:22:20.118877    2812 cache.go:157] /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0809 11:22:20.118919    2812 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 4.31800975s
	I0809 11:22:20.118945    2812 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0809 11:22:21.421945    2812 cache.go:157] /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0809 11:22:21.421991    2812 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 5.620822459s
	I0809 11:22:21.422053    2812 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0809 11:22:22.357064    2812 cache.go:157] /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0809 11:22:22.357136    2812 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.555991417s
	I0809 11:22:22.357166    2812 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0809 11:22:23.088555    2812 start.go:365] acquiring machines lock for test-preload-851000: {Name:mk8e7245a2916cb0ab3d957fdcdeb632388214d9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0809 11:22:23.088944    2812 start.go:369] acquired machines lock for "test-preload-851000" in 309.333µs
	I0809 11:22:23.089051    2812 start.go:93] Provisioning new machine with config: &{Name:test-preload-851000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernete
sConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-851000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mo
untMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0809 11:22:23.089309    2812 start.go:125] createHost starting for "" (driver="qemu2")
	I0809 11:22:23.094990    2812 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0809 11:22:23.140478    2812 start.go:159] libmachine.API.Create for "test-preload-851000" (driver="qemu2")
	I0809 11:22:23.140514    2812 client.go:168] LocalClient.Create starting
	I0809 11:22:23.140702    2812 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem
	I0809 11:22:23.140786    2812 main.go:141] libmachine: Decoding PEM data...
	I0809 11:22:23.140806    2812 main.go:141] libmachine: Parsing certificate...
	I0809 11:22:23.140902    2812 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/cert.pem
	I0809 11:22:23.140945    2812 main.go:141] libmachine: Decoding PEM data...
	I0809 11:22:23.140963    2812 main.go:141] libmachine: Parsing certificate...
	I0809 11:22:23.141525    2812 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17011-995/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17011-995/.minikube/cache/iso/arm64/minikube-v1.31.0-1690838458-16971-arm64.iso...
	I0809 11:22:23.284804    2812 main.go:141] libmachine: Creating SSH key...
	I0809 11:22:23.370948    2812 main.go:141] libmachine: Creating Disk image...
	I0809 11:22:23.370954    2812 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0809 11:22:23.371091    2812 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17011-995/.minikube/machines/test-preload-851000/disk.qcow2.raw /Users/jenkins/minikube-integration/17011-995/.minikube/machines/test-preload-851000/disk.qcow2
	I0809 11:22:23.379890    2812 main.go:141] libmachine: STDOUT: 
	I0809 11:22:23.379903    2812 main.go:141] libmachine: STDERR: 
	I0809 11:22:23.379964    2812 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/test-preload-851000/disk.qcow2 +20000M
	I0809 11:22:23.387298    2812 main.go:141] libmachine: STDOUT: Image resized.
	
	I0809 11:22:23.387315    2812 main.go:141] libmachine: STDERR: 
	I0809 11:22:23.387326    2812 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17011-995/.minikube/machines/test-preload-851000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17011-995/.minikube/machines/test-preload-851000/disk.qcow2
	I0809 11:22:23.387331    2812 main.go:141] libmachine: Starting QEMU VM...
	I0809 11:22:23.387381    2812 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17011-995/.minikube/machines/test-preload-851000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/test-preload-851000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17011-995/.minikube/machines/test-preload-851000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:15:02:ba:c6:9d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/test-preload-851000/disk.qcow2
	I0809 11:22:23.388892    2812 main.go:141] libmachine: STDOUT: 
	I0809 11:22:23.388906    2812 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0809 11:22:23.388917    2812 client.go:171] LocalClient.Create took 248.405083ms
	I0809 11:22:25.389167    2812 start.go:128] duration metric: createHost completed in 2.299860709s
	I0809 11:22:25.389252    2812 start.go:83] releasing machines lock for "test-preload-851000", held for 2.300362333s
	W0809 11:22:25.389568    2812 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-851000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-851000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:22:25.400206    2812 out.go:177] 
	W0809 11:22:25.404237    2812 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0809 11:22:25.404279    2812 out.go:239] * 
	* 
	W0809 11:22:25.407239    2812 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0809 11:22:25.416166    2812 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-851000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:522: *** TestPreload FAILED at 2023-08-09 11:22:25.433528 -0700 PDT m=+814.610930835
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-851000 -n test-preload-851000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-851000 -n test-preload-851000: exit status 7 (63.888041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-851000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-851000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-851000
--- FAIL: TestPreload (9.90s)

TestScheduledStopUnix (9.87s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-383000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-383000 --memory=2048 --driver=qemu2 : exit status 80 (9.709483625s)

-- stdout --
	* [scheduled-stop-383000] minikube v1.31.1 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17011
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node scheduled-stop-383000 in cluster scheduled-stop-383000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-383000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-383000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-383000] minikube v1.31.1 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17011
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node scheduled-stop-383000 in cluster scheduled-stop-383000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-383000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-383000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:522: *** TestScheduledStopUnix FAILED at 2023-08-09 11:22:35.30552 -0700 PDT m=+824.480987210
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-383000 -n scheduled-stop-383000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-383000 -n scheduled-stop-383000: exit status 7 (66.88075ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-383000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-383000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-383000
--- FAIL: TestScheduledStopUnix (9.87s)
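Editor's note: every qemu2 failure in this report shows the same signature, `Failed to connect to "/var/run/socket_vmnet": Connection refused`, which points at the socket_vmnet daemon not running on the CI host rather than at the individual tests. A minimal pre-flight check is sketched below; the socket path is taken from the log above, but the function name and everything else is an illustrative assumption, not part of minikube or this report:

```shell
# Sketch of a pre-flight check for the socket_vmnet unix socket.
# Assumption: socket_vmnet listens on /var/run/socket_vmnet (the path
# the failures above try to connect to).
check_vmnet_socket() {
  sock="${1:-/var/run/socket_vmnet}"
  if [ -S "$sock" ]; then
    # The daemon has created its listening socket.
    echo "ok: $sock is a socket"
  else
    echo "missing: $sock (is the socket_vmnet service running?)"
    return 1
  fi
}
```

Run before the suite, this would have flagged the missing daemon once instead of letting each test spend ~10s on two VM-creation attempts.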

TestSkaffold (11.91s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe3859713618 version
skaffold_test.go:63: skaffold version: v2.6.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-819000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-819000 --memory=2600 --driver=qemu2 : exit status 80 (9.842829542s)

-- stdout --
	* [skaffold-819000] minikube v1.31.1 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17011
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node skaffold-819000 in cluster skaffold-819000
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-819000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-819000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-819000] minikube v1.31.1 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17011
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node skaffold-819000 in cluster skaffold-819000
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-819000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-819000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:522: *** TestSkaffold FAILED at 2023-08-09 11:22:47.239579 -0700 PDT m=+836.395297793
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-819000 -n skaffold-819000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-819000 -n skaffold-819000: exit status 7 (61.556666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-819000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-819000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-819000
--- FAIL: TestSkaffold (11.91s)
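Editor's note: the verdict lines are buried in long stdout/stderr dumps. For triage, the failing tests and their durations can be pulled out of a raw `go test` log with a one-liner; the field positions assume the `--- FAIL: TestName (12.34s)` format shown above, and the helper name is illustrative:

```shell
# Sketch: list failing test names and durations from a go-test log.
# Field separator treats spaces, parentheses and colons as one delimiter,
# so "--- FAIL: TestSkaffold (11.91s)" splits into ---, FAIL,
# TestSkaffold, 11.91s.
extract_failures() {
  awk -F'[ ():]+' '/^--- FAIL/ { print $3, $4 }' "$@"
}
```

With no argument it reads stdin, so a report like this one can be piped straight in.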

TestRunningBinaryUpgrade (128.13s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:106: v1.6.2 release installation failed: bad response code: 404
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-08-09 11:25:35.343299 -0700 PDT m=+1004.485889251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-975000 -n running-upgrade-975000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-975000 -n running-upgrade-975000: exit status 85 (86.036125ms)

-- stdout --
	* Profile "running-upgrade-975000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p running-upgrade-975000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "running-upgrade-975000" host is not running, skipping log retrieval (state="* Profile \"running-upgrade-975000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p running-upgrade-975000\"")
helpers_test.go:175: Cleaning up "running-upgrade-975000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-975000
--- FAIL: TestRunningBinaryUpgrade (128.13s)
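Editor's note: unlike the socket_vmnet failures, this test failed earlier, at `v1.6.2 release installation failed: bad response code: 404`. On this darwin/arm64 agent the 404 is consistent with v1.6.2 predating arm64 Mac builds, so the release asset it asks for was never published. The sketch below only shows how such a release URL is composed; the URL layout is the conventional minikube release path and an assumption on my part, not taken from this log:

```shell
# Sketch: compose the download URL for a historical minikube release binary.
# Assumption: assets live at .../minikube/releases/<version>/minikube-<os>-<arch>.
release_url() {
  # $1 = version tag, $2 = GOOS, $3 = GOARCH
  printf 'https://storage.googleapis.com/minikube/releases/%s/minikube-%s-%s' "$1" "$2" "$3"
}

# The combination the test effectively requests on this agent:
release_url v1.6.2 darwin arm64
```

A quick `curl -sI` on that URL (not run here) would distinguish a missing asset from a transient network error.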

TestKubernetesUpgrade (15.28s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-413000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:234: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-413000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.7769305s)

-- stdout --
	* [kubernetes-upgrade-413000] minikube v1.31.1 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17011
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kubernetes-upgrade-413000 in cluster kubernetes-upgrade-413000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-413000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0809 11:25:35.741491    3333 out.go:296] Setting OutFile to fd 1 ...
	I0809 11:25:35.741714    3333 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:25:35.741717    3333 out.go:309] Setting ErrFile to fd 2...
	I0809 11:25:35.741719    3333 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:25:35.741844    3333 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17011-995/.minikube/bin
	I0809 11:25:35.742984    3333 out.go:303] Setting JSON to false
	I0809 11:25:35.758362    3333 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1509,"bootTime":1691604026,"procs":408,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0809 11:25:35.758426    3333 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0809 11:25:35.763569    3333 out.go:177] * [kubernetes-upgrade-413000] minikube v1.31.1 on Darwin 13.5 (arm64)
	I0809 11:25:35.773633    3333 notify.go:220] Checking for updates...
	I0809 11:25:35.777622    3333 out.go:177]   - MINIKUBE_LOCATION=17011
	I0809 11:25:35.780666    3333 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	I0809 11:25:35.783606    3333 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0809 11:25:35.786638    3333 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0809 11:25:35.789585    3333 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	I0809 11:25:35.792532    3333 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0809 11:25:35.795836    3333 config.go:182] Loaded profile config "cert-expiration-979000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0809 11:25:35.795897    3333 config.go:182] Loaded profile config "multinode-305000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0809 11:25:35.795938    3333 driver.go:373] Setting default libvirt URI to qemu:///system
	I0809 11:25:35.799590    3333 out.go:177] * Using the qemu2 driver based on user configuration
	I0809 11:25:35.806585    3333 start.go:298] selected driver: qemu2
	I0809 11:25:35.806594    3333 start.go:901] validating driver "qemu2" against <nil>
	I0809 11:25:35.806602    3333 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0809 11:25:35.808532    3333 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0809 11:25:35.811646    3333 out.go:177] * Automatically selected the socket_vmnet network
	I0809 11:25:35.814641    3333 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0809 11:25:35.814657    3333 cni.go:84] Creating CNI manager for ""
	I0809 11:25:35.814663    3333 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0809 11:25:35.814666    3333 start_flags.go:319] config:
	{Name:kubernetes-upgrade-413000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-413000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0809 11:25:35.818816    3333 iso.go:125] acquiring lock: {Name:mkcf9da2bbc06f7ffafe691590a499fa3fc28d1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 11:25:35.825664    3333 out.go:177] * Starting control plane node kubernetes-upgrade-413000 in cluster kubernetes-upgrade-413000
	I0809 11:25:35.829584    3333 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0809 11:25:35.829614    3333 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0809 11:25:35.829635    3333 cache.go:57] Caching tarball of preloaded images
	I0809 11:25:35.829700    3333 preload.go:174] Found /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0809 11:25:35.829706    3333 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0809 11:25:35.829772    3333 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/kubernetes-upgrade-413000/config.json ...
	I0809 11:25:35.829786    3333 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/kubernetes-upgrade-413000/config.json: {Name:mk6fb382909e7fdff5069c73ace5f0822facc983 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 11:25:35.829995    3333 start.go:365] acquiring machines lock for kubernetes-upgrade-413000: {Name:mk8e7245a2916cb0ab3d957fdcdeb632388214d9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0809 11:25:35.830035    3333 start.go:369] acquired machines lock for "kubernetes-upgrade-413000" in 30.25µs
	I0809 11:25:35.830046    3333 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-413000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-413000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0809 11:25:35.830079    3333 start.go:125] createHost starting for "" (driver="qemu2")
	I0809 11:25:35.834677    3333 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0809 11:25:35.850383    3333 start.go:159] libmachine.API.Create for "kubernetes-upgrade-413000" (driver="qemu2")
	I0809 11:25:35.850403    3333 client.go:168] LocalClient.Create starting
	I0809 11:25:35.850461    3333 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem
	I0809 11:25:35.850491    3333 main.go:141] libmachine: Decoding PEM data...
	I0809 11:25:35.850500    3333 main.go:141] libmachine: Parsing certificate...
	I0809 11:25:35.850543    3333 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/cert.pem
	I0809 11:25:35.850561    3333 main.go:141] libmachine: Decoding PEM data...
	I0809 11:25:35.850574    3333 main.go:141] libmachine: Parsing certificate...
	I0809 11:25:35.850881    3333 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17011-995/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17011-995/.minikube/cache/iso/arm64/minikube-v1.31.0-1690838458-16971-arm64.iso...
	I0809 11:25:35.964496    3333 main.go:141] libmachine: Creating SSH key...
	I0809 11:25:36.048575    3333 main.go:141] libmachine: Creating Disk image...
	I0809 11:25:36.048581    3333 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0809 11:25:36.048719    3333 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17011-995/.minikube/machines/kubernetes-upgrade-413000/disk.qcow2.raw /Users/jenkins/minikube-integration/17011-995/.minikube/machines/kubernetes-upgrade-413000/disk.qcow2
	I0809 11:25:36.057388    3333 main.go:141] libmachine: STDOUT: 
	I0809 11:25:36.057406    3333 main.go:141] libmachine: STDERR: 
	I0809 11:25:36.057459    3333 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/kubernetes-upgrade-413000/disk.qcow2 +20000M
	I0809 11:25:36.064776    3333 main.go:141] libmachine: STDOUT: Image resized.
	
	I0809 11:25:36.064789    3333 main.go:141] libmachine: STDERR: 
	I0809 11:25:36.064809    3333 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17011-995/.minikube/machines/kubernetes-upgrade-413000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17011-995/.minikube/machines/kubernetes-upgrade-413000/disk.qcow2
	I0809 11:25:36.064816    3333 main.go:141] libmachine: Starting QEMU VM...
	I0809 11:25:36.064859    3333 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17011-995/.minikube/machines/kubernetes-upgrade-413000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/kubernetes-upgrade-413000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17011-995/.minikube/machines/kubernetes-upgrade-413000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:b5:0d:ff:7b:19 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/kubernetes-upgrade-413000/disk.qcow2
	I0809 11:25:36.066409    3333 main.go:141] libmachine: STDOUT: 
	I0809 11:25:36.066425    3333 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0809 11:25:36.066442    3333 client.go:171] LocalClient.Create took 216.039833ms
	I0809 11:25:38.068592    3333 start.go:128] duration metric: createHost completed in 2.2385355s
	I0809 11:25:38.068693    3333 start.go:83] releasing machines lock for "kubernetes-upgrade-413000", held for 2.23870475s
	W0809 11:25:38.068762    3333 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:25:38.081294    3333 out.go:177] * Deleting "kubernetes-upgrade-413000" in qemu2 ...
	W0809 11:25:38.101875    3333 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:25:38.101910    3333 start.go:687] Will try again in 5 seconds ...
	I0809 11:25:43.104045    3333 start.go:365] acquiring machines lock for kubernetes-upgrade-413000: {Name:mk8e7245a2916cb0ab3d957fdcdeb632388214d9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0809 11:25:43.104522    3333 start.go:369] acquired machines lock for "kubernetes-upgrade-413000" in 376.041µs
	I0809 11:25:43.104646    3333 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-413000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-413000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0809 11:25:43.104915    3333 start.go:125] createHost starting for "" (driver="qemu2")
	I0809 11:25:43.110624    3333 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0809 11:25:43.159941    3333 start.go:159] libmachine.API.Create for "kubernetes-upgrade-413000" (driver="qemu2")
	I0809 11:25:43.160001    3333 client.go:168] LocalClient.Create starting
	I0809 11:25:43.160140    3333 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem
	I0809 11:25:43.160197    3333 main.go:141] libmachine: Decoding PEM data...
	I0809 11:25:43.160222    3333 main.go:141] libmachine: Parsing certificate...
	I0809 11:25:43.160287    3333 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/cert.pem
	I0809 11:25:43.160329    3333 main.go:141] libmachine: Decoding PEM data...
	I0809 11:25:43.160345    3333 main.go:141] libmachine: Parsing certificate...
	I0809 11:25:43.161304    3333 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17011-995/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17011-995/.minikube/cache/iso/arm64/minikube-v1.31.0-1690838458-16971-arm64.iso...
	I0809 11:25:43.287588    3333 main.go:141] libmachine: Creating SSH key...
	I0809 11:25:43.430195    3333 main.go:141] libmachine: Creating Disk image...
	I0809 11:25:43.430207    3333 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0809 11:25:43.430357    3333 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17011-995/.minikube/machines/kubernetes-upgrade-413000/disk.qcow2.raw /Users/jenkins/minikube-integration/17011-995/.minikube/machines/kubernetes-upgrade-413000/disk.qcow2
	I0809 11:25:43.438815    3333 main.go:141] libmachine: STDOUT: 
	I0809 11:25:43.438830    3333 main.go:141] libmachine: STDERR: 
	I0809 11:25:43.438887    3333 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/kubernetes-upgrade-413000/disk.qcow2 +20000M
	I0809 11:25:43.446029    3333 main.go:141] libmachine: STDOUT: Image resized.
	
	I0809 11:25:43.446052    3333 main.go:141] libmachine: STDERR: 
	I0809 11:25:43.446071    3333 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17011-995/.minikube/machines/kubernetes-upgrade-413000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17011-995/.minikube/machines/kubernetes-upgrade-413000/disk.qcow2
	I0809 11:25:43.446080    3333 main.go:141] libmachine: Starting QEMU VM...
	I0809 11:25:43.446128    3333 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17011-995/.minikube/machines/kubernetes-upgrade-413000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/kubernetes-upgrade-413000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17011-995/.minikube/machines/kubernetes-upgrade-413000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:1f:55:b3:16:31 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/kubernetes-upgrade-413000/disk.qcow2
	I0809 11:25:43.447636    3333 main.go:141] libmachine: STDOUT: 
	I0809 11:25:43.447649    3333 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0809 11:25:43.447663    3333 client.go:171] LocalClient.Create took 287.659625ms
	I0809 11:25:45.449819    3333 start.go:128] duration metric: createHost completed in 2.344897791s
	I0809 11:25:45.449912    3333 start.go:83] releasing machines lock for "kubernetes-upgrade-413000", held for 2.345421792s
	W0809 11:25:45.450365    3333 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-413000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-413000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:25:45.461043    3333 out.go:177] 
	W0809 11:25:45.465058    3333 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0809 11:25:45.465083    3333 out.go:239] * 
	* 
	W0809 11:25:45.467655    3333 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0809 11:25:45.477983    3333 out.go:177] 

** /stderr **
version_upgrade_test.go:236: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-413000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-413000
version_upgrade_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-413000 status --format={{.Host}}
version_upgrade_test.go:244: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-413000 status --format={{.Host}}: exit status 7 (33.428875ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:246: status error: exit status 7 (may be ok)
version_upgrade_test.go:255: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-413000 --memory=2200 --kubernetes-version=v1.28.0-rc.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:255: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-413000 --memory=2200 --kubernetes-version=v1.28.0-rc.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.182948083s)

-- stdout --
	* [kubernetes-upgrade-413000] minikube v1.31.1 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17011
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node kubernetes-upgrade-413000 in cluster kubernetes-upgrade-413000
	* Restarting existing qemu2 VM for "kubernetes-upgrade-413000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-413000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I0809 11:25:45.650857    3370 out.go:296] Setting OutFile to fd 1 ...
	I0809 11:25:45.650966    3370 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:25:45.650969    3370 out.go:309] Setting ErrFile to fd 2...
	I0809 11:25:45.650972    3370 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:25:45.651082    3370 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17011-995/.minikube/bin
	I0809 11:25:45.652052    3370 out.go:303] Setting JSON to false
	I0809 11:25:45.667133    3370 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1519,"bootTime":1691604026,"procs":419,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0809 11:25:45.667223    3370 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0809 11:25:45.670805    3370 out.go:177] * [kubernetes-upgrade-413000] minikube v1.31.1 on Darwin 13.5 (arm64)
	I0809 11:25:45.677685    3370 notify.go:220] Checking for updates...
	I0809 11:25:45.685698    3370 out.go:177]   - MINIKUBE_LOCATION=17011
	I0809 11:25:45.689655    3370 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	I0809 11:25:45.692627    3370 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0809 11:25:45.696509    3370 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0809 11:25:45.699602    3370 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	I0809 11:25:45.702643    3370 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0809 11:25:45.706445    3370 config.go:182] Loaded profile config "kubernetes-upgrade-413000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0809 11:25:45.707051    3370 driver.go:373] Setting default libvirt URI to qemu:///system
	I0809 11:25:45.711571    3370 out.go:177] * Using the qemu2 driver based on existing profile
	I0809 11:25:45.718606    3370 start.go:298] selected driver: qemu2
	I0809 11:25:45.718610    3370 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-413000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubern
etesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-413000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:do
cker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0809 11:25:45.718660    3370 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0809 11:25:45.720845    3370 cni.go:84] Creating CNI manager for ""
	I0809 11:25:45.720857    3370 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0809 11:25:45.720867    3370 start_flags.go:319] config:
	{Name:kubernetes-upgrade-413000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0-rc.0 ClusterName:kubernetes-upgrade-41300
0 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQ
emuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0809 11:25:45.725304    3370 iso.go:125] acquiring lock: {Name:mkcf9da2bbc06f7ffafe691590a499fa3fc28d1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 11:25:45.733594    3370 out.go:177] * Starting control plane node kubernetes-upgrade-413000 in cluster kubernetes-upgrade-413000
	I0809 11:25:45.737578    3370 preload.go:132] Checking if preload exists for k8s version v1.28.0-rc.0 and runtime docker
	I0809 11:25:45.737602    3370 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-rc.0-docker-overlay2-arm64.tar.lz4
	I0809 11:25:45.737612    3370 cache.go:57] Caching tarball of preloaded images
	I0809 11:25:45.737676    3370 preload.go:174] Found /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-rc.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0809 11:25:45.737682    3370 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.0-rc.0 on docker
	I0809 11:25:45.737744    3370 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/kubernetes-upgrade-413000/config.json ...
	I0809 11:25:45.737999    3370 start.go:365] acquiring machines lock for kubernetes-upgrade-413000: {Name:mk8e7245a2916cb0ab3d957fdcdeb632388214d9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0809 11:25:45.738028    3370 start.go:369] acquired machines lock for "kubernetes-upgrade-413000" in 22.208µs
	I0809 11:25:45.738039    3370 start.go:96] Skipping create...Using existing machine configuration
	I0809 11:25:45.738045    3370 fix.go:54] fixHost starting: 
	I0809 11:25:45.738178    3370 fix.go:102] recreateIfNeeded on kubernetes-upgrade-413000: state=Stopped err=<nil>
	W0809 11:25:45.738187    3370 fix.go:128] unexpected machine state, will restart: <nil>
	I0809 11:25:45.745553    3370 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-413000" ...
	I0809 11:25:45.749686    3370 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17011-995/.minikube/machines/kubernetes-upgrade-413000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/kubernetes-upgrade-413000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17011-995/.minikube/machines/kubernetes-upgrade-413000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:1f:55:b3:16:31 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/kubernetes-upgrade-413000/disk.qcow2
	I0809 11:25:45.751857    3370 main.go:141] libmachine: STDOUT: 
	I0809 11:25:45.751873    3370 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0809 11:25:45.751907    3370 fix.go:56] fixHost completed within 13.862791ms
	I0809 11:25:45.751912    3370 start.go:83] releasing machines lock for "kubernetes-upgrade-413000", held for 13.879583ms
	W0809 11:25:45.751921    3370 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0809 11:25:45.751963    3370 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:25:45.751968    3370 start.go:687] Will try again in 5 seconds ...
	I0809 11:25:50.754028    3370 start.go:365] acquiring machines lock for kubernetes-upgrade-413000: {Name:mk8e7245a2916cb0ab3d957fdcdeb632388214d9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0809 11:25:50.754398    3370 start.go:369] acquired machines lock for "kubernetes-upgrade-413000" in 292.75µs
	I0809 11:25:50.754515    3370 start.go:96] Skipping create...Using existing machine configuration
	I0809 11:25:50.754534    3370 fix.go:54] fixHost starting: 
	I0809 11:25:50.755343    3370 fix.go:102] recreateIfNeeded on kubernetes-upgrade-413000: state=Stopped err=<nil>
	W0809 11:25:50.755375    3370 fix.go:128] unexpected machine state, will restart: <nil>
	I0809 11:25:50.760035    3370 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-413000" ...
	I0809 11:25:50.767062    3370 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17011-995/.minikube/machines/kubernetes-upgrade-413000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/kubernetes-upgrade-413000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17011-995/.minikube/machines/kubernetes-upgrade-413000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:1f:55:b3:16:31 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/kubernetes-upgrade-413000/disk.qcow2
	I0809 11:25:50.775237    3370 main.go:141] libmachine: STDOUT: 
	I0809 11:25:50.775306    3370 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0809 11:25:50.775413    3370 fix.go:56] fixHost completed within 20.855542ms
	I0809 11:25:50.775435    3370 start.go:83] releasing machines lock for "kubernetes-upgrade-413000", held for 21.016708ms
	W0809 11:25:50.775661    3370 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-413000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-413000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:25:50.782960    3370 out.go:177] 
	W0809 11:25:50.787020    3370 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0809 11:25:50.787044    3370 out.go:239] * 
	* 
	W0809 11:25:50.790087    3370 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0809 11:25:50.795009    3370 out.go:177] 

** /stderr **
version_upgrade_test.go:257: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-413000 --memory=2200 --kubernetes-version=v1.28.0-rc.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:260: (dbg) Run:  kubectl --context kubernetes-upgrade-413000 version --output=json
version_upgrade_test.go:260: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-413000 version --output=json: exit status 1 (63.597ms)

** stderr ** 
	error: context "kubernetes-upgrade-413000" does not exist

** /stderr **
version_upgrade_test.go:262: error running kubectl: exit status 1
panic.go:522: *** TestKubernetesUpgrade FAILED at 2023-08-09 11:25:50.872316 -0700 PDT m=+1020.015293293
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-413000 -n kubernetes-upgrade-413000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-413000 -n kubernetes-upgrade-413000: exit status 7 (32.213458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-413000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-413000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-413000
--- FAIL: TestKubernetesUpgrade (15.28s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.76s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.31.1 on darwin (arm64)
- MINIKUBE_LOCATION=17011
- KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1181323863/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.76s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.34s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.31.1 on darwin (arm64)
- MINIKUBE_LOCATION=17011
- KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3867227092/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.34s)

TestStoppedBinaryUpgrade/Setup (174.28s)

=== RUN   TestStoppedBinaryUpgrade/Setup
version_upgrade_test.go:167: v1.6.2 release installation failed: bad response code: 404
--- FAIL: TestStoppedBinaryUpgrade/Setup (174.28s)

TestPause/serial/Start (9.86s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-443000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-443000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.78686675s)

-- stdout --
	* [pause-443000] minikube v1.31.1 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17011
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node pause-443000 in cluster pause-443000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-443000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-443000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-443000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-443000 -n pause-443000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-443000 -n pause-443000: exit status 7 (68.193834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-443000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.86s)

TestNoKubernetes/serial/StartWithK8s (9.84s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-803000 --driver=qemu2 
E0809 11:26:39.882030    1410 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/ingress-addon-legacy-050000/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-803000 --driver=qemu2 : exit status 80 (9.770101417s)

-- stdout --
	* [NoKubernetes-803000] minikube v1.31.1 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17011
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node NoKubernetes-803000 in cluster NoKubernetes-803000
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-803000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-803000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-803000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-803000 -n NoKubernetes-803000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-803000 -n NoKubernetes-803000: exit status 7 (67.364375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-803000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.84s)

TestNoKubernetes/serial/StartWithStopK8s (5.36s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-803000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-803000 --no-kubernetes --driver=qemu2 : exit status 80 (5.29267s)

-- stdout --
	* [NoKubernetes-803000] minikube v1.31.1 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17011
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-803000
	* Restarting existing qemu2 VM for "NoKubernetes-803000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-803000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-803000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-803000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-803000 -n NoKubernetes-803000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-803000 -n NoKubernetes-803000: exit status 7 (65.524667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-803000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.36s)

TestNoKubernetes/serial/Start (5.3s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-803000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-803000 --no-kubernetes --driver=qemu2 : exit status 80 (5.234627583s)

-- stdout --
	* [NoKubernetes-803000] minikube v1.31.1 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17011
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-803000
	* Restarting existing qemu2 VM for "NoKubernetes-803000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-803000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-803000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-803000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-803000 -n NoKubernetes-803000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-803000 -n NoKubernetes-803000: exit status 7 (68.533ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-803000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.30s)

TestNoKubernetes/serial/StartNoArgs (5.3s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-803000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-803000 --driver=qemu2 : exit status 80 (5.226975958s)

-- stdout --
	* [NoKubernetes-803000] minikube v1.31.1 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17011
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-803000
	* Restarting existing qemu2 VM for "NoKubernetes-803000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-803000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-803000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-803000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-803000 -n NoKubernetes-803000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-803000 -n NoKubernetes-803000: exit status 7 (71.67725ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-803000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.30s)
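Every failure in this group reduces to the same root cause: nothing is accepting connections on /var/run/socket_vmnet, so the qemu2 driver's socket_vmnet_client cannot attach the VM to the network. A minimal triage sketch for the CI host follows; the launchd service label in the comment is an assumption taken from the upstream socket_vmnet install instructions, so verify it before use:

```shell
#!/bin/sh
# Triage sketch for the recurring error seen throughout this report:
#   ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
# "Connection refused" on a unix-domain socket means no daemon is listening
# on that path: socket_vmnet is down, or it is listening somewhere else.

# Prints "present" if the path is a live unix-domain socket, else "missing".
check_socket() {
    if [ -S "$1" ]; then echo present; else echo missing; fi
}

SOCK=/var/run/socket_vmnet    # the path minikube passed, per the log above
echo "socket $SOCK: $(check_socket "$SOCK")"

# Is the daemon process alive at all?
pgrep -fl socket_vmnet || echo "no socket_vmnet process found"

# If missing/dead, restart the service. The label below is an ASSUMPTION
# based on the upstream socket_vmnet docs; confirm yours first with
# `sudo launchctl list | grep vmnet`:
#   sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet
```

If the socket is present but connections are still refused, check that the daemon's listen path matches the `SocketVMnetPath:/var/run/socket_vmnet` value recorded in the cluster config dumps above.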

TestNetworkPlugins/group/auto/Start (9.9s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-769000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
E0809 11:27:07.590272    1410 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/ingress-addon-legacy-050000/client.crt: no such file or directory
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-769000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.897325042s)

-- stdout --
	* [auto-769000] minikube v1.31.1 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17011
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node auto-769000 in cluster auto-769000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-769000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0809 11:27:03.916367    3509 out.go:296] Setting OutFile to fd 1 ...
	I0809 11:27:03.916474    3509 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:27:03.916476    3509 out.go:309] Setting ErrFile to fd 2...
	I0809 11:27:03.916478    3509 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:27:03.916602    3509 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17011-995/.minikube/bin
	I0809 11:27:03.917590    3509 out.go:303] Setting JSON to false
	I0809 11:27:03.932674    3509 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1597,"bootTime":1691604026,"procs":406,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0809 11:27:03.932746    3509 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0809 11:27:03.935565    3509 out.go:177] * [auto-769000] minikube v1.31.1 on Darwin 13.5 (arm64)
	I0809 11:27:03.944090    3509 out.go:177]   - MINIKUBE_LOCATION=17011
	I0809 11:27:03.944125    3509 notify.go:220] Checking for updates...
	I0809 11:27:03.948047    3509 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	I0809 11:27:03.951131    3509 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0809 11:27:03.954112    3509 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0809 11:27:03.957138    3509 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	I0809 11:27:03.960098    3509 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0809 11:27:03.963434    3509 config.go:182] Loaded profile config "multinode-305000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0809 11:27:03.963475    3509 driver.go:373] Setting default libvirt URI to qemu:///system
	I0809 11:27:03.967042    3509 out.go:177] * Using the qemu2 driver based on user configuration
	I0809 11:27:03.974062    3509 start.go:298] selected driver: qemu2
	I0809 11:27:03.974067    3509 start.go:901] validating driver "qemu2" against <nil>
	I0809 11:27:03.974072    3509 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0809 11:27:03.975908    3509 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0809 11:27:03.977345    3509 out.go:177] * Automatically selected the socket_vmnet network
	I0809 11:27:03.980173    3509 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0809 11:27:03.980196    3509 cni.go:84] Creating CNI manager for ""
	I0809 11:27:03.980205    3509 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0809 11:27:03.980209    3509 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0809 11:27:03.980214    3509 start_flags.go:319] config:
	{Name:auto-769000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:auto-769000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISoc
ket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}

	I0809 11:27:03.984205    3509 iso.go:125] acquiring lock: {Name:mkcf9da2bbc06f7ffafe691590a499fa3fc28d1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 11:27:03.987178    3509 out.go:177] * Starting control plane node auto-769000 in cluster auto-769000
	I0809 11:27:03.995068    3509 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0809 11:27:03.995090    3509 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4
	I0809 11:27:03.995102    3509 cache.go:57] Caching tarball of preloaded images
	I0809 11:27:03.995171    3509 preload.go:174] Found /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0809 11:27:03.995175    3509 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on docker
	I0809 11:27:03.995228    3509 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/auto-769000/config.json ...
	I0809 11:27:03.995240    3509 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/auto-769000/config.json: {Name:mke4cd8525499dce4aac9933a56d4d427a122953 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 11:27:03.995426    3509 start.go:365] acquiring machines lock for auto-769000: {Name:mk8e7245a2916cb0ab3d957fdcdeb632388214d9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0809 11:27:03.995453    3509 start.go:369] acquired machines lock for "auto-769000" in 22µs
	I0809 11:27:03.995462    3509 start.go:93] Provisioning new machine with config: &{Name:auto-769000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.27.4 ClusterName:auto-769000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0809 11:27:03.995501    3509 start.go:125] createHost starting for "" (driver="qemu2")
	I0809 11:27:04.000086    3509 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0809 11:27:04.015374    3509 start.go:159] libmachine.API.Create for "auto-769000" (driver="qemu2")
	I0809 11:27:04.015404    3509 client.go:168] LocalClient.Create starting
	I0809 11:27:04.015453    3509 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem
	I0809 11:27:04.015479    3509 main.go:141] libmachine: Decoding PEM data...
	I0809 11:27:04.015491    3509 main.go:141] libmachine: Parsing certificate...
	I0809 11:27:04.015533    3509 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/cert.pem
	I0809 11:27:04.015551    3509 main.go:141] libmachine: Decoding PEM data...
	I0809 11:27:04.015559    3509 main.go:141] libmachine: Parsing certificate...
	I0809 11:27:04.015846    3509 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17011-995/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17011-995/.minikube/cache/iso/arm64/minikube-v1.31.0-1690838458-16971-arm64.iso...
	I0809 11:27:04.130804    3509 main.go:141] libmachine: Creating SSH key...
	I0809 11:27:04.245931    3509 main.go:141] libmachine: Creating Disk image...
	I0809 11:27:04.245937    3509 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0809 11:27:04.246084    3509 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17011-995/.minikube/machines/auto-769000/disk.qcow2.raw /Users/jenkins/minikube-integration/17011-995/.minikube/machines/auto-769000/disk.qcow2
	I0809 11:27:04.254789    3509 main.go:141] libmachine: STDOUT: 
	I0809 11:27:04.254802    3509 main.go:141] libmachine: STDERR: 
	I0809 11:27:04.254857    3509 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/auto-769000/disk.qcow2 +20000M
	I0809 11:27:04.262031    3509 main.go:141] libmachine: STDOUT: Image resized.
	
	I0809 11:27:04.262046    3509 main.go:141] libmachine: STDERR: 
	I0809 11:27:04.262064    3509 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17011-995/.minikube/machines/auto-769000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17011-995/.minikube/machines/auto-769000/disk.qcow2
	I0809 11:27:04.262071    3509 main.go:141] libmachine: Starting QEMU VM...
	I0809 11:27:04.262116    3509 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17011-995/.minikube/machines/auto-769000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/auto-769000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17011-995/.minikube/machines/auto-769000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:8a:eb:93:13:20 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/auto-769000/disk.qcow2
	I0809 11:27:04.263670    3509 main.go:141] libmachine: STDOUT: 
	I0809 11:27:04.263683    3509 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0809 11:27:04.263705    3509 client.go:171] LocalClient.Create took 248.302709ms
	I0809 11:27:06.265813    3509 start.go:128] duration metric: createHost completed in 2.270351334s
	I0809 11:27:06.266154    3509 start.go:83] releasing machines lock for "auto-769000", held for 2.270746541s
	W0809 11:27:06.266218    3509 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:27:06.274584    3509 out.go:177] * Deleting "auto-769000" in qemu2 ...
	W0809 11:27:06.299397    3509 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:27:06.299427    3509 start.go:687] Will try again in 5 seconds ...
	I0809 11:27:11.301549    3509 start.go:365] acquiring machines lock for auto-769000: {Name:mk8e7245a2916cb0ab3d957fdcdeb632388214d9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0809 11:27:11.301983    3509 start.go:369] acquired machines lock for "auto-769000" in 329.583µs
	I0809 11:27:11.302093    3509 start.go:93] Provisioning new machine with config: &{Name:auto-769000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.27.4 ClusterName:auto-769000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0809 11:27:11.302344    3509 start.go:125] createHost starting for "" (driver="qemu2")
	I0809 11:27:11.312090    3509 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0809 11:27:11.358580    3509 start.go:159] libmachine.API.Create for "auto-769000" (driver="qemu2")
	I0809 11:27:11.358631    3509 client.go:168] LocalClient.Create starting
	I0809 11:27:11.358770    3509 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem
	I0809 11:27:11.358821    3509 main.go:141] libmachine: Decoding PEM data...
	I0809 11:27:11.358836    3509 main.go:141] libmachine: Parsing certificate...
	I0809 11:27:11.358909    3509 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/cert.pem
	I0809 11:27:11.358944    3509 main.go:141] libmachine: Decoding PEM data...
	I0809 11:27:11.358956    3509 main.go:141] libmachine: Parsing certificate...
	I0809 11:27:11.359442    3509 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17011-995/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17011-995/.minikube/cache/iso/arm64/minikube-v1.31.0-1690838458-16971-arm64.iso...
	I0809 11:27:11.487862    3509 main.go:141] libmachine: Creating SSH key...
	I0809 11:27:11.727770    3509 main.go:141] libmachine: Creating Disk image...
	I0809 11:27:11.727781    3509 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0809 11:27:11.727927    3509 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17011-995/.minikube/machines/auto-769000/disk.qcow2.raw /Users/jenkins/minikube-integration/17011-995/.minikube/machines/auto-769000/disk.qcow2
	I0809 11:27:11.736450    3509 main.go:141] libmachine: STDOUT: 
	I0809 11:27:11.736469    3509 main.go:141] libmachine: STDERR: 
	I0809 11:27:11.736528    3509 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/auto-769000/disk.qcow2 +20000M
	I0809 11:27:11.743742    3509 main.go:141] libmachine: STDOUT: Image resized.
	
	I0809 11:27:11.743800    3509 main.go:141] libmachine: STDERR: 
	I0809 11:27:11.743817    3509 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17011-995/.minikube/machines/auto-769000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17011-995/.minikube/machines/auto-769000/disk.qcow2
	I0809 11:27:11.743823    3509 main.go:141] libmachine: Starting QEMU VM...
	I0809 11:27:11.743860    3509 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17011-995/.minikube/machines/auto-769000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/auto-769000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17011-995/.minikube/machines/auto-769000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:91:4a:2f:ff:58 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/auto-769000/disk.qcow2
	I0809 11:27:11.745372    3509 main.go:141] libmachine: STDOUT: 
	I0809 11:27:11.745385    3509 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0809 11:27:11.745418    3509 client.go:171] LocalClient.Create took 386.789042ms
	I0809 11:27:13.747524    3509 start.go:128] duration metric: createHost completed in 2.445217208s
	I0809 11:27:13.747629    3509 start.go:83] releasing machines lock for "auto-769000", held for 2.445676875s
	W0809 11:27:13.748057    3509 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-769000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-769000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:27:13.756658    3509 out.go:177] 
	W0809 11:27:13.761791    3509 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0809 11:27:13.761818    3509 out.go:239] * 
	* 
	W0809 11:27:13.764429    3509 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0809 11:27:13.772697    3509 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.90s)

TestNetworkPlugins/group/kindnet/Start (9.87s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-769000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-769000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.8719845s)

-- stdout --
	* [kindnet-769000] minikube v1.31.1 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17011
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kindnet-769000 in cluster kindnet-769000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-769000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0809 11:27:15.845169    3623 out.go:296] Setting OutFile to fd 1 ...
	I0809 11:27:15.845279    3623 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:27:15.845284    3623 out.go:309] Setting ErrFile to fd 2...
	I0809 11:27:15.845286    3623 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:27:15.845395    3623 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17011-995/.minikube/bin
	I0809 11:27:15.846412    3623 out.go:303] Setting JSON to false
	I0809 11:27:15.861498    3623 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1609,"bootTime":1691604026,"procs":406,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0809 11:27:15.861588    3623 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0809 11:27:15.865808    3623 out.go:177] * [kindnet-769000] minikube v1.31.1 on Darwin 13.5 (arm64)
	I0809 11:27:15.872892    3623 out.go:177]   - MINIKUBE_LOCATION=17011
	I0809 11:27:15.872952    3623 notify.go:220] Checking for updates...
	I0809 11:27:15.876760    3623 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	I0809 11:27:15.880816    3623 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0809 11:27:15.882276    3623 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0809 11:27:15.885804    3623 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	I0809 11:27:15.888787    3623 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0809 11:27:15.892147    3623 config.go:182] Loaded profile config "multinode-305000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0809 11:27:15.892193    3623 driver.go:373] Setting default libvirt URI to qemu:///system
	I0809 11:27:15.896736    3623 out.go:177] * Using the qemu2 driver based on user configuration
	I0809 11:27:15.903805    3623 start.go:298] selected driver: qemu2
	I0809 11:27:15.903811    3623 start.go:901] validating driver "qemu2" against <nil>
	I0809 11:27:15.903818    3623 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0809 11:27:15.905789    3623 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0809 11:27:15.909736    3623 out.go:177] * Automatically selected the socket_vmnet network
	I0809 11:27:15.912949    3623 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0809 11:27:15.912988    3623 cni.go:84] Creating CNI manager for "kindnet"
	I0809 11:27:15.912993    3623 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0809 11:27:15.913003    3623 start_flags.go:319] config:
	{Name:kindnet-769000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:kindnet-769000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: S
SHAgentPID:0}
	I0809 11:27:15.917460    3623 iso.go:125] acquiring lock: {Name:mkcf9da2bbc06f7ffafe691590a499fa3fc28d1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 11:27:15.925750    3623 out.go:177] * Starting control plane node kindnet-769000 in cluster kindnet-769000
	I0809 11:27:15.929839    3623 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0809 11:27:15.929873    3623 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4
	I0809 11:27:15.929885    3623 cache.go:57] Caching tarball of preloaded images
	I0809 11:27:15.929951    3623 preload.go:174] Found /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0809 11:27:15.929985    3623 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on docker
	I0809 11:27:15.930076    3623 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/kindnet-769000/config.json ...
	I0809 11:27:15.930089    3623 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/kindnet-769000/config.json: {Name:mk5fc6e167badf343109f3d844401dee12bc8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 11:27:15.930286    3623 start.go:365] acquiring machines lock for kindnet-769000: {Name:mk8e7245a2916cb0ab3d957fdcdeb632388214d9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0809 11:27:15.930317    3623 start.go:369] acquired machines lock for "kindnet-769000" in 25.25µs
	I0809 11:27:15.930339    3623 start.go:93] Provisioning new machine with config: &{Name:kindnet-769000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.27.4 ClusterName:kindnet-769000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0809 11:27:15.930372    3623 start.go:125] createHost starting for "" (driver="qemu2")
	I0809 11:27:15.933844    3623 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0809 11:27:15.950077    3623 start.go:159] libmachine.API.Create for "kindnet-769000" (driver="qemu2")
	I0809 11:27:15.950096    3623 client.go:168] LocalClient.Create starting
	I0809 11:27:15.950149    3623 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem
	I0809 11:27:15.950182    3623 main.go:141] libmachine: Decoding PEM data...
	I0809 11:27:15.950194    3623 main.go:141] libmachine: Parsing certificate...
	I0809 11:27:15.950235    3623 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/cert.pem
	I0809 11:27:15.950254    3623 main.go:141] libmachine: Decoding PEM data...
	I0809 11:27:15.950269    3623 main.go:141] libmachine: Parsing certificate...
	I0809 11:27:15.950619    3623 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17011-995/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17011-995/.minikube/cache/iso/arm64/minikube-v1.31.0-1690838458-16971-arm64.iso...
	I0809 11:27:16.063873    3623 main.go:141] libmachine: Creating SSH key...
	I0809 11:27:16.252315    3623 main.go:141] libmachine: Creating Disk image...
	I0809 11:27:16.252325    3623 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0809 11:27:16.252499    3623 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17011-995/.minikube/machines/kindnet-769000/disk.qcow2.raw /Users/jenkins/minikube-integration/17011-995/.minikube/machines/kindnet-769000/disk.qcow2
	I0809 11:27:16.261137    3623 main.go:141] libmachine: STDOUT: 
	I0809 11:27:16.261160    3623 main.go:141] libmachine: STDERR: 
	I0809 11:27:16.261236    3623 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/kindnet-769000/disk.qcow2 +20000M
	I0809 11:27:16.268612    3623 main.go:141] libmachine: STDOUT: Image resized.
	
	I0809 11:27:16.268625    3623 main.go:141] libmachine: STDERR: 
	I0809 11:27:16.268649    3623 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17011-995/.minikube/machines/kindnet-769000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17011-995/.minikube/machines/kindnet-769000/disk.qcow2
	I0809 11:27:16.268656    3623 main.go:141] libmachine: Starting QEMU VM...
	I0809 11:27:16.268688    3623 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17011-995/.minikube/machines/kindnet-769000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/kindnet-769000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17011-995/.minikube/machines/kindnet-769000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:36:2f:a8:df:0b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/kindnet-769000/disk.qcow2
	I0809 11:27:16.270155    3623 main.go:141] libmachine: STDOUT: 
	I0809 11:27:16.270167    3623 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0809 11:27:16.270186    3623 client.go:171] LocalClient.Create took 320.093375ms
	I0809 11:27:18.272415    3623 start.go:128] duration metric: createHost completed in 2.342074208s
	I0809 11:27:18.272501    3623 start.go:83] releasing machines lock for "kindnet-769000", held for 2.34222975s
	W0809 11:27:18.272613    3623 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:27:18.282816    3623 out.go:177] * Deleting "kindnet-769000" in qemu2 ...
	W0809 11:27:18.303517    3623 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:27:18.303549    3623 start.go:687] Will try again in 5 seconds ...
	I0809 11:27:23.305771    3623 start.go:365] acquiring machines lock for kindnet-769000: {Name:mk8e7245a2916cb0ab3d957fdcdeb632388214d9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0809 11:27:23.306205    3623 start.go:369] acquired machines lock for "kindnet-769000" in 332.791µs
	I0809 11:27:23.306331    3623 start.go:93] Provisioning new machine with config: &{Name:kindnet-769000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.27.4 ClusterName:kindnet-769000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0809 11:27:23.306656    3623 start.go:125] createHost starting for "" (driver="qemu2")
	I0809 11:27:23.316067    3623 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0809 11:27:23.363989    3623 start.go:159] libmachine.API.Create for "kindnet-769000" (driver="qemu2")
	I0809 11:27:23.364045    3623 client.go:168] LocalClient.Create starting
	I0809 11:27:23.364178    3623 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem
	I0809 11:27:23.364249    3623 main.go:141] libmachine: Decoding PEM data...
	I0809 11:27:23.364273    3623 main.go:141] libmachine: Parsing certificate...
	I0809 11:27:23.364353    3623 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/cert.pem
	I0809 11:27:23.364393    3623 main.go:141] libmachine: Decoding PEM data...
	I0809 11:27:23.364411    3623 main.go:141] libmachine: Parsing certificate...
	I0809 11:27:23.364961    3623 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17011-995/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17011-995/.minikube/cache/iso/arm64/minikube-v1.31.0-1690838458-16971-arm64.iso...
	I0809 11:27:23.494210    3623 main.go:141] libmachine: Creating SSH key...
	I0809 11:27:23.629103    3623 main.go:141] libmachine: Creating Disk image...
	I0809 11:27:23.629111    3623 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0809 11:27:23.629275    3623 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17011-995/.minikube/machines/kindnet-769000/disk.qcow2.raw /Users/jenkins/minikube-integration/17011-995/.minikube/machines/kindnet-769000/disk.qcow2
	I0809 11:27:23.637705    3623 main.go:141] libmachine: STDOUT: 
	I0809 11:27:23.637724    3623 main.go:141] libmachine: STDERR: 
	I0809 11:27:23.637783    3623 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/kindnet-769000/disk.qcow2 +20000M
	I0809 11:27:23.644997    3623 main.go:141] libmachine: STDOUT: Image resized.
	
	I0809 11:27:23.645019    3623 main.go:141] libmachine: STDERR: 
	I0809 11:27:23.645034    3623 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17011-995/.minikube/machines/kindnet-769000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17011-995/.minikube/machines/kindnet-769000/disk.qcow2
	I0809 11:27:23.645042    3623 main.go:141] libmachine: Starting QEMU VM...
	I0809 11:27:23.645079    3623 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17011-995/.minikube/machines/kindnet-769000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/kindnet-769000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17011-995/.minikube/machines/kindnet-769000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:dd:65:cf:27:3e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/kindnet-769000/disk.qcow2
	I0809 11:27:23.646646    3623 main.go:141] libmachine: STDOUT: 
	I0809 11:27:23.646660    3623 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0809 11:27:23.646673    3623 client.go:171] LocalClient.Create took 282.630709ms
	I0809 11:27:25.648809    3623 start.go:128] duration metric: createHost completed in 2.342156334s
	I0809 11:27:25.648875    3623 start.go:83] releasing machines lock for "kindnet-769000", held for 2.342700875s
	W0809 11:27:25.649249    3623 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-769000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-769000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:27:25.658877    3623 out.go:177] 
	W0809 11:27:25.662947    3623 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0809 11:27:25.662970    3623 out.go:239] * 
	* 
	W0809 11:27:25.665857    3623 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0809 11:27:25.675801    3623 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.87s)
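Every failure in this group bottoms out in the same root cause buried in the klog-style stderr above: `Failed to connect to "/var/run/socket_vmnet": Connection refused`. A minimal sketch (a hypothetical helper, not part of minikube or its test harness) that scans a captured stderr dump for that line:

```python
import re

# Matches minikube's klog-style lines, e.g.
# I0809 11:27:23.646660    3623 main.go:141] libmachine: STDERR: ...
KLOG_LINE = re.compile(r'^[EWI]\d{4} [\d:.]+ +\d+ \S+\] (.*)$')

SOCKET_ERR = 'Failed to connect to "/var/run/socket_vmnet"'

def root_cause(stderr_text):
    """Return the last socket_vmnet connection error found in a
    minikube stderr dump, or None if the run did not hit it."""
    cause = None
    for line in stderr_text.splitlines():
        m = KLOG_LINE.match(line.strip())
        if m and SOCKET_ERR in m.group(1):
            cause = m.group(1)
    return cause
```

Run against the stderr block above, this would report the libmachine STDERR line; a `None` result would mean the test failed for some other reason and the full log needs reading.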

TestNetworkPlugins/group/calico/Start (9.88s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-769000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-769000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.878654667s)

-- stdout --
	* [calico-769000] minikube v1.31.1 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17011
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node calico-769000 in cluster calico-769000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-769000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0809 11:27:27.852158    3739 out.go:296] Setting OutFile to fd 1 ...
	I0809 11:27:27.852265    3739 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:27:27.852268    3739 out.go:309] Setting ErrFile to fd 2...
	I0809 11:27:27.852270    3739 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:27:27.852385    3739 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17011-995/.minikube/bin
	I0809 11:27:27.853462    3739 out.go:303] Setting JSON to false
	I0809 11:27:27.868480    3739 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1621,"bootTime":1691604026,"procs":401,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0809 11:27:27.868551    3739 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0809 11:27:27.872979    3739 out.go:177] * [calico-769000] minikube v1.31.1 on Darwin 13.5 (arm64)
	I0809 11:27:27.880990    3739 out.go:177]   - MINIKUBE_LOCATION=17011
	I0809 11:27:27.884928    3739 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	I0809 11:27:27.881075    3739 notify.go:220] Checking for updates...
	I0809 11:27:27.890880    3739 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0809 11:27:27.893905    3739 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0809 11:27:27.897041    3739 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	I0809 11:27:27.899878    3739 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0809 11:27:27.903170    3739 config.go:182] Loaded profile config "multinode-305000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0809 11:27:27.903219    3739 driver.go:373] Setting default libvirt URI to qemu:///system
	I0809 11:27:27.906880    3739 out.go:177] * Using the qemu2 driver based on user configuration
	I0809 11:27:27.913959    3739 start.go:298] selected driver: qemu2
	I0809 11:27:27.913965    3739 start.go:901] validating driver "qemu2" against <nil>
	I0809 11:27:27.913970    3739 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0809 11:27:27.915956    3739 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0809 11:27:27.918914    3739 out.go:177] * Automatically selected the socket_vmnet network
	I0809 11:27:27.921986    3739 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0809 11:27:27.922007    3739 cni.go:84] Creating CNI manager for "calico"
	I0809 11:27:27.922011    3739 start_flags.go:314] Found "Calico" CNI - setting NetworkPlugin=cni
	I0809 11:27:27.922018    3739 start_flags.go:319] config:
	{Name:calico-769000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:calico-769000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHA
gentPID:0}
	I0809 11:27:27.926146    3739 iso.go:125] acquiring lock: {Name:mkcf9da2bbc06f7ffafe691590a499fa3fc28d1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 11:27:27.932900    3739 out.go:177] * Starting control plane node calico-769000 in cluster calico-769000
	I0809 11:27:27.936880    3739 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0809 11:27:27.936899    3739 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4
	I0809 11:27:27.936912    3739 cache.go:57] Caching tarball of preloaded images
	I0809 11:27:27.936997    3739 preload.go:174] Found /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0809 11:27:27.937003    3739 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on docker
	I0809 11:27:27.937066    3739 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/calico-769000/config.json ...
	I0809 11:27:27.937078    3739 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/calico-769000/config.json: {Name:mk365cf981fefe86439494b993384837ac6b37f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 11:27:27.937275    3739 start.go:365] acquiring machines lock for calico-769000: {Name:mk8e7245a2916cb0ab3d957fdcdeb632388214d9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0809 11:27:27.937313    3739 start.go:369] acquired machines lock for "calico-769000" in 32.667µs
	I0809 11:27:27.937322    3739 start.go:93] Provisioning new machine with config: &{Name:calico-769000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.27.4 ClusterName:calico-769000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0809 11:27:27.937360    3739 start.go:125] createHost starting for "" (driver="qemu2")
	I0809 11:27:27.944870    3739 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0809 11:27:27.960916    3739 start.go:159] libmachine.API.Create for "calico-769000" (driver="qemu2")
	I0809 11:27:27.960944    3739 client.go:168] LocalClient.Create starting
	I0809 11:27:27.960998    3739 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem
	I0809 11:27:27.961023    3739 main.go:141] libmachine: Decoding PEM data...
	I0809 11:27:27.961034    3739 main.go:141] libmachine: Parsing certificate...
	I0809 11:27:27.961076    3739 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/cert.pem
	I0809 11:27:27.961103    3739 main.go:141] libmachine: Decoding PEM data...
	I0809 11:27:27.961114    3739 main.go:141] libmachine: Parsing certificate...
	I0809 11:27:27.961460    3739 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17011-995/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17011-995/.minikube/cache/iso/arm64/minikube-v1.31.0-1690838458-16971-arm64.iso...
	I0809 11:27:28.093423    3739 main.go:141] libmachine: Creating SSH key...
	I0809 11:27:28.266932    3739 main.go:141] libmachine: Creating Disk image...
	I0809 11:27:28.266941    3739 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0809 11:27:28.267123    3739 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17011-995/.minikube/machines/calico-769000/disk.qcow2.raw /Users/jenkins/minikube-integration/17011-995/.minikube/machines/calico-769000/disk.qcow2
	I0809 11:27:28.276113    3739 main.go:141] libmachine: STDOUT: 
	I0809 11:27:28.276130    3739 main.go:141] libmachine: STDERR: 
	I0809 11:27:28.276192    3739 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/calico-769000/disk.qcow2 +20000M
	I0809 11:27:28.283405    3739 main.go:141] libmachine: STDOUT: Image resized.
	
	I0809 11:27:28.283417    3739 main.go:141] libmachine: STDERR: 
	I0809 11:27:28.283440    3739 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17011-995/.minikube/machines/calico-769000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17011-995/.minikube/machines/calico-769000/disk.qcow2
	I0809 11:27:28.283447    3739 main.go:141] libmachine: Starting QEMU VM...
	I0809 11:27:28.283483    3739 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17011-995/.minikube/machines/calico-769000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/calico-769000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17011-995/.minikube/machines/calico-769000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:f3:92:41:a6:6c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/calico-769000/disk.qcow2
	I0809 11:27:28.284931    3739 main.go:141] libmachine: STDOUT: 
	I0809 11:27:28.284944    3739 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0809 11:27:28.284962    3739 client.go:171] LocalClient.Create took 324.018792ms
	I0809 11:27:30.287085    3739 start.go:128] duration metric: createHost completed in 2.349765542s
	I0809 11:27:30.287388    3739 start.go:83] releasing machines lock for "calico-769000", held for 2.350125083s
	W0809 11:27:30.287451    3739 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:27:30.298614    3739 out.go:177] * Deleting "calico-769000" in qemu2 ...
	W0809 11:27:30.319800    3739 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:27:30.319822    3739 start.go:687] Will try again in 5 seconds ...
	I0809 11:27:35.321995    3739 start.go:365] acquiring machines lock for calico-769000: {Name:mk8e7245a2916cb0ab3d957fdcdeb632388214d9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0809 11:27:35.322459    3739 start.go:369] acquired machines lock for "calico-769000" in 319.25µs
	I0809 11:27:35.322570    3739 start.go:93] Provisioning new machine with config: &{Name:calico-769000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.27.4 ClusterName:calico-769000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0809 11:27:35.322883    3739 start.go:125] createHost starting for "" (driver="qemu2")
	I0809 11:27:35.332538    3739 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0809 11:27:35.380999    3739 start.go:159] libmachine.API.Create for "calico-769000" (driver="qemu2")
	I0809 11:27:35.381049    3739 client.go:168] LocalClient.Create starting
	I0809 11:27:35.381171    3739 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem
	I0809 11:27:35.381235    3739 main.go:141] libmachine: Decoding PEM data...
	I0809 11:27:35.381253    3739 main.go:141] libmachine: Parsing certificate...
	I0809 11:27:35.381341    3739 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/cert.pem
	I0809 11:27:35.381381    3739 main.go:141] libmachine: Decoding PEM data...
	I0809 11:27:35.381399    3739 main.go:141] libmachine: Parsing certificate...
	I0809 11:27:35.381965    3739 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17011-995/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17011-995/.minikube/cache/iso/arm64/minikube-v1.31.0-1690838458-16971-arm64.iso...
	I0809 11:27:35.508677    3739 main.go:141] libmachine: Creating SSH key...
	I0809 11:27:35.642805    3739 main.go:141] libmachine: Creating Disk image...
	I0809 11:27:35.642819    3739 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0809 11:27:35.642974    3739 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17011-995/.minikube/machines/calico-769000/disk.qcow2.raw /Users/jenkins/minikube-integration/17011-995/.minikube/machines/calico-769000/disk.qcow2
	I0809 11:27:35.651576    3739 main.go:141] libmachine: STDOUT: 
	I0809 11:27:35.651596    3739 main.go:141] libmachine: STDERR: 
	I0809 11:27:35.651650    3739 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/calico-769000/disk.qcow2 +20000M
	I0809 11:27:35.658858    3739 main.go:141] libmachine: STDOUT: Image resized.
	
	I0809 11:27:35.658871    3739 main.go:141] libmachine: STDERR: 
	I0809 11:27:35.658891    3739 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17011-995/.minikube/machines/calico-769000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17011-995/.minikube/machines/calico-769000/disk.qcow2
	I0809 11:27:35.658898    3739 main.go:141] libmachine: Starting QEMU VM...
	I0809 11:27:35.658974    3739 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17011-995/.minikube/machines/calico-769000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/calico-769000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17011-995/.minikube/machines/calico-769000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:f7:47:ed:74:35 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/calico-769000/disk.qcow2
	I0809 11:27:35.660441    3739 main.go:141] libmachine: STDOUT: 
	I0809 11:27:35.660452    3739 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0809 11:27:35.660465    3739 client.go:171] LocalClient.Create took 279.414917ms
	I0809 11:27:37.662629    3739 start.go:128] duration metric: createHost completed in 2.339763416s
	I0809 11:27:37.662736    3739 start.go:83] releasing machines lock for "calico-769000", held for 2.340303083s
	W0809 11:27:37.663193    3739 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-769000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-769000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:27:37.673729    3739 out.go:177] 
	W0809 11:27:37.677809    3739 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0809 11:27:37.677832    3739 out.go:239] * 
	* 
	W0809 11:27:37.680728    3739 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0809 11:27:37.690804    3739 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.88s)
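The calico run fails identically to kindnet, and every failed test in this report ends in a `--- FAIL: <name> (<duration>s)` line of the same shape, which makes the failures easy to aggregate from the raw text. A sketch (hypothetical tooling, not part of this report's pipeline):

```python
import re

# go test summary lines, e.g.
# --- FAIL: TestNetworkPlugins/group/calico/Start (9.88s)
FAIL_LINE = re.compile(r'^--- FAIL: (\S+) \(([\d.]+)s\)$')

def collect_failures(report_text):
    """Yield (test_name, seconds) pairs from go test output."""
    for line in report_text.splitlines():
        m = FAIL_LINE.match(line.strip())
        if m:
            yield m.group(1), float(m.group(2))

report = """--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.87s)
--- FAIL: TestNetworkPlugins/group/calico/Start (9.88s)"""
print(list(collect_failures(report)))
# → [('TestNetworkPlugins/group/kindnet/Start', 9.87),
#    ('TestNetworkPlugins/group/calico/Start', 9.88)]
```

The near-identical ~9.9s durations are themselves a signal: each run spends its time on two VM-creation attempts that both die immediately on the refused socket, rather than on anything Kubernetes-specific.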

TestNetworkPlugins/group/custom-flannel/Start (9.8s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-769000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-769000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.794446291s)

-- stdout --
	* [custom-flannel-769000] minikube v1.31.1 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17011
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node custom-flannel-769000 in cluster custom-flannel-769000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-769000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0809 11:27:40.027898    3862 out.go:296] Setting OutFile to fd 1 ...
	I0809 11:27:40.028017    3862 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:27:40.028020    3862 out.go:309] Setting ErrFile to fd 2...
	I0809 11:27:40.028023    3862 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:27:40.028149    3862 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17011-995/.minikube/bin
	I0809 11:27:40.029137    3862 out.go:303] Setting JSON to false
	I0809 11:27:40.044180    3862 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1634,"bootTime":1691604026,"procs":402,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0809 11:27:40.044231    3862 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0809 11:27:40.049731    3862 out.go:177] * [custom-flannel-769000] minikube v1.31.1 on Darwin 13.5 (arm64)
	I0809 11:27:40.057650    3862 out.go:177]   - MINIKUBE_LOCATION=17011
	I0809 11:27:40.061690    3862 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	I0809 11:27:40.057707    3862 notify.go:220] Checking for updates...
	I0809 11:27:40.067659    3862 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0809 11:27:40.070670    3862 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0809 11:27:40.071992    3862 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	I0809 11:27:40.074660    3862 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0809 11:27:40.078025    3862 config.go:182] Loaded profile config "multinode-305000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0809 11:27:40.078066    3862 driver.go:373] Setting default libvirt URI to qemu:///system
	I0809 11:27:40.082497    3862 out.go:177] * Using the qemu2 driver based on user configuration
	I0809 11:27:40.089658    3862 start.go:298] selected driver: qemu2
	I0809 11:27:40.089665    3862 start.go:901] validating driver "qemu2" against <nil>
	I0809 11:27:40.089672    3862 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0809 11:27:40.091757    3862 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0809 11:27:40.095542    3862 out.go:177] * Automatically selected the socket_vmnet network
	I0809 11:27:40.098766    3862 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0809 11:27:40.098793    3862 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0809 11:27:40.098812    3862 start_flags.go:314] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0809 11:27:40.098817    3862 start_flags.go:319] config:
	{Name:custom-flannel-769000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:custom-flannel-769000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/sock
et_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0809 11:27:40.103094    3862 iso.go:125] acquiring lock: {Name:mkcf9da2bbc06f7ffafe691590a499fa3fc28d1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 11:27:40.111643    3862 out.go:177] * Starting control plane node custom-flannel-769000 in cluster custom-flannel-769000
	I0809 11:27:40.115713    3862 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0809 11:27:40.115736    3862 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4
	I0809 11:27:40.115750    3862 cache.go:57] Caching tarball of preloaded images
	I0809 11:27:40.115818    3862 preload.go:174] Found /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0809 11:27:40.115825    3862 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on docker
	I0809 11:27:40.115900    3862 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/custom-flannel-769000/config.json ...
	I0809 11:27:40.115921    3862 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/custom-flannel-769000/config.json: {Name:mk9ba1248923cca343f3e43f1e3c97ce84e2df83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 11:27:40.116112    3862 start.go:365] acquiring machines lock for custom-flannel-769000: {Name:mk8e7245a2916cb0ab3d957fdcdeb632388214d9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0809 11:27:40.116142    3862 start.go:369] acquired machines lock for "custom-flannel-769000" in 23.958µs
	I0809 11:27:40.116151    3862 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-769000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.27.4 ClusterName:custom-flannel-769000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0809 11:27:40.116182    3862 start.go:125] createHost starting for "" (driver="qemu2")
	I0809 11:27:40.120639    3862 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0809 11:27:40.135969    3862 start.go:159] libmachine.API.Create for "custom-flannel-769000" (driver="qemu2")
	I0809 11:27:40.136000    3862 client.go:168] LocalClient.Create starting
	I0809 11:27:40.136051    3862 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem
	I0809 11:27:40.136077    3862 main.go:141] libmachine: Decoding PEM data...
	I0809 11:27:40.136091    3862 main.go:141] libmachine: Parsing certificate...
	I0809 11:27:40.136125    3862 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/cert.pem
	I0809 11:27:40.136148    3862 main.go:141] libmachine: Decoding PEM data...
	I0809 11:27:40.136155    3862 main.go:141] libmachine: Parsing certificate...
	I0809 11:27:40.136464    3862 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17011-995/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17011-995/.minikube/cache/iso/arm64/minikube-v1.31.0-1690838458-16971-arm64.iso...
	I0809 11:27:40.250460    3862 main.go:141] libmachine: Creating SSH key...
	I0809 11:27:40.285469    3862 main.go:141] libmachine: Creating Disk image...
	I0809 11:27:40.285475    3862 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0809 11:27:40.285615    3862 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17011-995/.minikube/machines/custom-flannel-769000/disk.qcow2.raw /Users/jenkins/minikube-integration/17011-995/.minikube/machines/custom-flannel-769000/disk.qcow2
	I0809 11:27:40.294127    3862 main.go:141] libmachine: STDOUT: 
	I0809 11:27:40.294143    3862 main.go:141] libmachine: STDERR: 
	I0809 11:27:40.294205    3862 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/custom-flannel-769000/disk.qcow2 +20000M
	I0809 11:27:40.301390    3862 main.go:141] libmachine: STDOUT: Image resized.
	
	I0809 11:27:40.301400    3862 main.go:141] libmachine: STDERR: 
	I0809 11:27:40.301418    3862 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17011-995/.minikube/machines/custom-flannel-769000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17011-995/.minikube/machines/custom-flannel-769000/disk.qcow2
	I0809 11:27:40.301423    3862 main.go:141] libmachine: Starting QEMU VM...
	I0809 11:27:40.301465    3862 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17011-995/.minikube/machines/custom-flannel-769000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/custom-flannel-769000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17011-995/.minikube/machines/custom-flannel-769000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:37:a5:f6:58:87 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/custom-flannel-769000/disk.qcow2
	I0809 11:27:40.302967    3862 main.go:141] libmachine: STDOUT: 
	I0809 11:27:40.302979    3862 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0809 11:27:40.302995    3862 client.go:171] LocalClient.Create took 166.994416ms
	I0809 11:27:42.305161    3862 start.go:128] duration metric: createHost completed in 2.189016125s
	I0809 11:27:42.305216    3862 start.go:83] releasing machines lock for "custom-flannel-769000", held for 2.189116417s
	W0809 11:27:42.305304    3862 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:27:42.312708    3862 out.go:177] * Deleting "custom-flannel-769000" in qemu2 ...
	W0809 11:27:42.336341    3862 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:27:42.336367    3862 start.go:687] Will try again in 5 seconds ...
	I0809 11:27:47.338534    3862 start.go:365] acquiring machines lock for custom-flannel-769000: {Name:mk8e7245a2916cb0ab3d957fdcdeb632388214d9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0809 11:27:47.338983    3862 start.go:369] acquired machines lock for "custom-flannel-769000" in 341.125µs
	I0809 11:27:47.339106    3862 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-769000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.27.4 ClusterName:custom-flannel-769000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0809 11:27:47.339455    3862 start.go:125] createHost starting for "" (driver="qemu2")
	I0809 11:27:47.348107    3862 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0809 11:27:47.394447    3862 start.go:159] libmachine.API.Create for "custom-flannel-769000" (driver="qemu2")
	I0809 11:27:47.394492    3862 client.go:168] LocalClient.Create starting
	I0809 11:27:47.394594    3862 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem
	I0809 11:27:47.394657    3862 main.go:141] libmachine: Decoding PEM data...
	I0809 11:27:47.394680    3862 main.go:141] libmachine: Parsing certificate...
	I0809 11:27:47.394746    3862 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/cert.pem
	I0809 11:27:47.394786    3862 main.go:141] libmachine: Decoding PEM data...
	I0809 11:27:47.394799    3862 main.go:141] libmachine: Parsing certificate...
	I0809 11:27:47.395375    3862 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17011-995/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17011-995/.minikube/cache/iso/arm64/minikube-v1.31.0-1690838458-16971-arm64.iso...
	I0809 11:27:47.525161    3862 main.go:141] libmachine: Creating SSH key...
	I0809 11:27:47.732911    3862 main.go:141] libmachine: Creating Disk image...
	I0809 11:27:47.732917    3862 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0809 11:27:47.733092    3862 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17011-995/.minikube/machines/custom-flannel-769000/disk.qcow2.raw /Users/jenkins/minikube-integration/17011-995/.minikube/machines/custom-flannel-769000/disk.qcow2
	I0809 11:27:47.742353    3862 main.go:141] libmachine: STDOUT: 
	I0809 11:27:47.742366    3862 main.go:141] libmachine: STDERR: 
	I0809 11:27:47.742455    3862 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/custom-flannel-769000/disk.qcow2 +20000M
	I0809 11:27:47.749723    3862 main.go:141] libmachine: STDOUT: Image resized.
	
	I0809 11:27:47.749735    3862 main.go:141] libmachine: STDERR: 
	I0809 11:27:47.749748    3862 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17011-995/.minikube/machines/custom-flannel-769000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17011-995/.minikube/machines/custom-flannel-769000/disk.qcow2
	I0809 11:27:47.749754    3862 main.go:141] libmachine: Starting QEMU VM...
	I0809 11:27:47.749790    3862 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17011-995/.minikube/machines/custom-flannel-769000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/custom-flannel-769000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17011-995/.minikube/machines/custom-flannel-769000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:ba:fd:eb:9b:34 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/custom-flannel-769000/disk.qcow2
	I0809 11:27:47.751321    3862 main.go:141] libmachine: STDOUT: 
	I0809 11:27:47.751333    3862 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0809 11:27:47.751345    3862 client.go:171] LocalClient.Create took 356.85725ms
	I0809 11:27:49.753488    3862 start.go:128] duration metric: createHost completed in 2.414058625s
	I0809 11:27:49.753594    3862 start.go:83] releasing machines lock for "custom-flannel-769000", held for 2.414645542s
	W0809 11:27:49.754123    3862 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-769000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-769000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:27:49.765026    3862 out.go:177] 
	W0809 11:27:49.769194    3862 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0809 11:27:49.769227    3862 out.go:239] * 
	* 
	W0809 11:27:49.771640    3862 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0809 11:27:49.781973    3862 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.80s)
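Both provisioning attempts above fail with `Failed to connect to "/var/run/socket_vmnet": Connection refused`, which suggests the `socket_vmnet` daemon was not running on the agent rather than a problem with the test itself. A minimal pre-flight sketch for such a check (the socket path comes from the log; the Homebrew service name is an assumption about how `socket_vmnet` was installed):

```shell
# Check whether the socket_vmnet daemon is accepting connections.
# SOCK matches SocketVMnetPath from the minikube config in the log above.
SOCK=/var/run/socket_vmnet
if [ -S "$SOCK" ]; then
  echo "socket_vmnet is listening at $SOCK"
else
  # Hypothetical remediation -- assumes socket_vmnet was installed via Homebrew.
  echo "no socket at $SOCK; try: sudo brew services start socket_vmnet"
fi
```

Running a check like this before the test suite would distinguish an agent misconfiguration from a genuine driver regression.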

                                                
                                    
TestNetworkPlugins/group/false/Start (9.8s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-769000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-769000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.801308125s)

                                                
                                                
-- stdout --
	* [false-769000] minikube v1.31.1 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17011
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node false-769000 in cluster false-769000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-769000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0809 11:27:52.109234    3982 out.go:296] Setting OutFile to fd 1 ...
	I0809 11:27:52.109354    3982 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:27:52.109357    3982 out.go:309] Setting ErrFile to fd 2...
	I0809 11:27:52.109359    3982 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:27:52.109466    3982 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17011-995/.minikube/bin
	I0809 11:27:52.110478    3982 out.go:303] Setting JSON to false
	I0809 11:27:52.125488    3982 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1646,"bootTime":1691604026,"procs":399,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0809 11:27:52.125580    3982 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0809 11:27:52.131271    3982 out.go:177] * [false-769000] minikube v1.31.1 on Darwin 13.5 (arm64)
	I0809 11:27:52.139365    3982 out.go:177]   - MINIKUBE_LOCATION=17011
	I0809 11:27:52.139429    3982 notify.go:220] Checking for updates...
	I0809 11:27:52.143276    3982 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	I0809 11:27:52.146360    3982 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0809 11:27:52.149387    3982 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0809 11:27:52.152363    3982 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	I0809 11:27:52.155347    3982 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0809 11:27:52.158669    3982 config.go:182] Loaded profile config "multinode-305000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0809 11:27:52.158718    3982 driver.go:373] Setting default libvirt URI to qemu:///system
	I0809 11:27:52.162231    3982 out.go:177] * Using the qemu2 driver based on user configuration
	I0809 11:27:52.169318    3982 start.go:298] selected driver: qemu2
	I0809 11:27:52.169323    3982 start.go:901] validating driver "qemu2" against <nil>
	I0809 11:27:52.169333    3982 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0809 11:27:52.171146    3982 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0809 11:27:52.174250    3982 out.go:177] * Automatically selected the socket_vmnet network
	I0809 11:27:52.177499    3982 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0809 11:27:52.177529    3982 cni.go:84] Creating CNI manager for "false"
	I0809 11:27:52.177536    3982 start_flags.go:319] config:
	{Name:false-769000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:false-769000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRIS
ocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPI
D:0}
	I0809 11:27:52.182891    3982 iso.go:125] acquiring lock: {Name:mkcf9da2bbc06f7ffafe691590a499fa3fc28d1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 11:27:52.188276    3982 out.go:177] * Starting control plane node false-769000 in cluster false-769000
	I0809 11:27:52.192290    3982 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0809 11:27:52.192305    3982 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4
	I0809 11:27:52.192315    3982 cache.go:57] Caching tarball of preloaded images
	I0809 11:27:52.192367    3982 preload.go:174] Found /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0809 11:27:52.192372    3982 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on docker
	I0809 11:27:52.192430    3982 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/false-769000/config.json ...
	I0809 11:27:52.192443    3982 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/false-769000/config.json: {Name:mkeca57a57db8a2c916586deb624c1e23655f1c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 11:27:52.192662    3982 start.go:365] acquiring machines lock for false-769000: {Name:mk8e7245a2916cb0ab3d957fdcdeb632388214d9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0809 11:27:52.192691    3982 start.go:369] acquired machines lock for "false-769000" in 24.25µs
	I0809 11:27:52.192701    3982 start.go:93] Provisioning new machine with config: &{Name:false-769000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.27.4 ClusterName:false-769000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26
2144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0809 11:27:52.192739    3982 start.go:125] createHost starting for "" (driver="qemu2")
	I0809 11:27:52.200335    3982 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0809 11:27:52.216511    3982 start.go:159] libmachine.API.Create for "false-769000" (driver="qemu2")
	I0809 11:27:52.216530    3982 client.go:168] LocalClient.Create starting
	I0809 11:27:52.216579    3982 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem
	I0809 11:27:52.216604    3982 main.go:141] libmachine: Decoding PEM data...
	I0809 11:27:52.216616    3982 main.go:141] libmachine: Parsing certificate...
	I0809 11:27:52.216659    3982 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/cert.pem
	I0809 11:27:52.216678    3982 main.go:141] libmachine: Decoding PEM data...
	I0809 11:27:52.216686    3982 main.go:141] libmachine: Parsing certificate...
	I0809 11:27:52.217022    3982 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17011-995/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17011-995/.minikube/cache/iso/arm64/minikube-v1.31.0-1690838458-16971-arm64.iso...
	I0809 11:27:52.332469    3982 main.go:141] libmachine: Creating SSH key...
	I0809 11:27:52.504622    3982 main.go:141] libmachine: Creating Disk image...
	I0809 11:27:52.504629    3982 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0809 11:27:52.504771    3982 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17011-995/.minikube/machines/false-769000/disk.qcow2.raw /Users/jenkins/minikube-integration/17011-995/.minikube/machines/false-769000/disk.qcow2
	I0809 11:27:52.513515    3982 main.go:141] libmachine: STDOUT: 
	I0809 11:27:52.513534    3982 main.go:141] libmachine: STDERR: 
	I0809 11:27:52.513585    3982 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/false-769000/disk.qcow2 +20000M
	I0809 11:27:52.520688    3982 main.go:141] libmachine: STDOUT: Image resized.
	
	I0809 11:27:52.520710    3982 main.go:141] libmachine: STDERR: 
	I0809 11:27:52.520726    3982 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17011-995/.minikube/machines/false-769000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17011-995/.minikube/machines/false-769000/disk.qcow2
	I0809 11:27:52.520731    3982 main.go:141] libmachine: Starting QEMU VM...
	I0809 11:27:52.520763    3982 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17011-995/.minikube/machines/false-769000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/false-769000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17011-995/.minikube/machines/false-769000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:d0:60:98:8c:ee -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/false-769000/disk.qcow2
	I0809 11:27:52.522281    3982 main.go:141] libmachine: STDOUT: 
	I0809 11:27:52.522295    3982 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0809 11:27:52.522312    3982 client.go:171] LocalClient.Create took 305.784875ms
	I0809 11:27:54.524436    3982 start.go:128] duration metric: createHost completed in 2.331734125s
	I0809 11:27:54.524495    3982 start.go:83] releasing machines lock for "false-769000", held for 2.331851792s
	W0809 11:27:54.524584    3982 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:27:54.531808    3982 out.go:177] * Deleting "false-769000" in qemu2 ...
	W0809 11:27:54.551508    3982 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:27:54.551529    3982 start.go:687] Will try again in 5 seconds ...
	I0809 11:27:59.553664    3982 start.go:365] acquiring machines lock for false-769000: {Name:mk8e7245a2916cb0ab3d957fdcdeb632388214d9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0809 11:27:59.554149    3982 start.go:369] acquired machines lock for "false-769000" in 360.958µs
	I0809 11:27:59.554271    3982 start.go:93] Provisioning new machine with config: &{Name:false-769000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.27.4 ClusterName:false-769000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26
2144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0809 11:27:59.554531    3982 start.go:125] createHost starting for "" (driver="qemu2")
	I0809 11:27:59.560014    3982 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0809 11:27:59.607873    3982 start.go:159] libmachine.API.Create for "false-769000" (driver="qemu2")
	I0809 11:27:59.607916    3982 client.go:168] LocalClient.Create starting
	I0809 11:27:59.608038    3982 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem
	I0809 11:27:59.608093    3982 main.go:141] libmachine: Decoding PEM data...
	I0809 11:27:59.608108    3982 main.go:141] libmachine: Parsing certificate...
	I0809 11:27:59.608194    3982 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/cert.pem
	I0809 11:27:59.608231    3982 main.go:141] libmachine: Decoding PEM data...
	I0809 11:27:59.608242    3982 main.go:141] libmachine: Parsing certificate...
	I0809 11:27:59.608807    3982 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17011-995/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17011-995/.minikube/cache/iso/arm64/minikube-v1.31.0-1690838458-16971-arm64.iso...
	I0809 11:27:59.734546    3982 main.go:141] libmachine: Creating SSH key...
	I0809 11:27:59.823307    3982 main.go:141] libmachine: Creating Disk image...
	I0809 11:27:59.823312    3982 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0809 11:27:59.823467    3982 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17011-995/.minikube/machines/false-769000/disk.qcow2.raw /Users/jenkins/minikube-integration/17011-995/.minikube/machines/false-769000/disk.qcow2
	I0809 11:27:59.831925    3982 main.go:141] libmachine: STDOUT: 
	I0809 11:27:59.831940    3982 main.go:141] libmachine: STDERR: 
	I0809 11:27:59.831996    3982 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/false-769000/disk.qcow2 +20000M
	I0809 11:27:59.839259    3982 main.go:141] libmachine: STDOUT: Image resized.
	
	I0809 11:27:59.839273    3982 main.go:141] libmachine: STDERR: 
	I0809 11:27:59.839285    3982 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17011-995/.minikube/machines/false-769000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17011-995/.minikube/machines/false-769000/disk.qcow2
	I0809 11:27:59.839295    3982 main.go:141] libmachine: Starting QEMU VM...
	I0809 11:27:59.839324    3982 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17011-995/.minikube/machines/false-769000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/false-769000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17011-995/.minikube/machines/false-769000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:7a:43:55:44:ec -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/false-769000/disk.qcow2
	I0809 11:27:59.840716    3982 main.go:141] libmachine: STDOUT: 
	I0809 11:27:59.840731    3982 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0809 11:27:59.840743    3982 client.go:171] LocalClient.Create took 232.82825ms
	I0809 11:28:01.842848    3982 start.go:128] duration metric: createHost completed in 2.288348417s
	I0809 11:28:01.842914    3982 start.go:83] releasing machines lock for "false-769000", held for 2.288797333s
	W0809 11:28:01.843342    3982 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-769000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-769000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:28:01.852953    3982 out.go:177] 
	W0809 11:28:01.857050    3982 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0809 11:28:01.857097    3982 out.go:239] * 
	* 
	W0809 11:28:01.859526    3982 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0809 11:28:01.869994    3982 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.80s)

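Every failure in this group traces back to the same root cause visible in the logs above: `Failed to connect to "/var/run/socket_vmnet": Connection refused`, meaning the socket_vmnet daemon was not listening on the CI host when libmachine launched QEMU. As a triage aid (not part of minikube; the `classify_start_error` helper, the socket path taken from this log, and the Homebrew service name are all assumptions), the failure pattern could be checked with a sketch like:

```shell
#!/bin/sh
# Hypothetical triage sketch for the recurring qemu2/socket_vmnet failure
# seen throughout this report. The socket path below is the one printed in
# the logs above; other installs may place it elsewhere.

SOCK="/var/run/socket_vmnet"

# Map a minikube start error string to a coarse failure class.
classify_start_error() {
  case "$1" in
    *"Failed to connect to \"$SOCK\""*) echo "socket_vmnet-down" ;;
    *"exit status 80"*)                 echo "guest-provision" ;;
    *)                                  echo "unknown" ;;
  esac
}

# Check whether the daemon socket exists; if not, suggest (re)starting it.
check_socket_vmnet() {
  if [ -S "$SOCK" ]; then
    echo "ok: $SOCK present"
  else
    echo "missing: $SOCK; try: sudo brew services start socket_vmnet" >&2
    return 1
  fi
}
```

On this host the likely fix is ensuring the socket_vmnet service is running before the suite starts; the exact service name and socket location depend on how socket_vmnet was installed.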
TestNetworkPlugins/group/enable-default-cni/Start (9.75s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-769000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-769000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.744119375s)

-- stdout --
	* [enable-default-cni-769000] minikube v1.31.1 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17011
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node enable-default-cni-769000 in cluster enable-default-cni-769000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-769000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0809 11:28:03.996577    4095 out.go:296] Setting OutFile to fd 1 ...
	I0809 11:28:03.996674    4095 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:28:03.996677    4095 out.go:309] Setting ErrFile to fd 2...
	I0809 11:28:03.996679    4095 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:28:03.996795    4095 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17011-995/.minikube/bin
	I0809 11:28:03.997842    4095 out.go:303] Setting JSON to false
	I0809 11:28:04.012915    4095 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1658,"bootTime":1691604026,"procs":400,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0809 11:28:04.012982    4095 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0809 11:28:04.018459    4095 out.go:177] * [enable-default-cni-769000] minikube v1.31.1 on Darwin 13.5 (arm64)
	I0809 11:28:04.026483    4095 out.go:177]   - MINIKUBE_LOCATION=17011
	I0809 11:28:04.026551    4095 notify.go:220] Checking for updates...
	I0809 11:28:04.030433    4095 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	I0809 11:28:04.033519    4095 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0809 11:28:04.036483    4095 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0809 11:28:04.039402    4095 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	I0809 11:28:04.042449    4095 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0809 11:28:04.045815    4095 config.go:182] Loaded profile config "multinode-305000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0809 11:28:04.045860    4095 driver.go:373] Setting default libvirt URI to qemu:///system
	I0809 11:28:04.053416    4095 out.go:177] * Using the qemu2 driver based on user configuration
	I0809 11:28:04.060464    4095 start.go:298] selected driver: qemu2
	I0809 11:28:04.060469    4095 start.go:901] validating driver "qemu2" against <nil>
	I0809 11:28:04.060477    4095 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0809 11:28:04.062570    4095 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0809 11:28:04.065387    4095 out.go:177] * Automatically selected the socket_vmnet network
	E0809 11:28:04.068478    4095 start_flags.go:453] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0809 11:28:04.068486    4095 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0809 11:28:04.068504    4095 cni.go:84] Creating CNI manager for "bridge"
	I0809 11:28:04.068508    4095 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0809 11:28:04.068515    4095 start_flags.go:319] config:
	{Name:enable-default-cni-769000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:enable-default-cni-769000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet Sta
ticIP: SSHAuthSock: SSHAgentPID:0}
	I0809 11:28:04.072746    4095 iso.go:125] acquiring lock: {Name:mkcf9da2bbc06f7ffafe691590a499fa3fc28d1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 11:28:04.079435    4095 out.go:177] * Starting control plane node enable-default-cni-769000 in cluster enable-default-cni-769000
	I0809 11:28:04.083411    4095 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0809 11:28:04.083433    4095 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4
	I0809 11:28:04.083440    4095 cache.go:57] Caching tarball of preloaded images
	I0809 11:28:04.083498    4095 preload.go:174] Found /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0809 11:28:04.083504    4095 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on docker
	I0809 11:28:04.083566    4095 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/enable-default-cni-769000/config.json ...
	I0809 11:28:04.083584    4095 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/enable-default-cni-769000/config.json: {Name:mka577f9870974517b47cba5319a0cec92aaef0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 11:28:04.083793    4095 start.go:365] acquiring machines lock for enable-default-cni-769000: {Name:mk8e7245a2916cb0ab3d957fdcdeb632388214d9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0809 11:28:04.083832    4095 start.go:369] acquired machines lock for "enable-default-cni-769000" in 26.083µs
	I0809 11:28:04.083843    4095 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-769000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:enable-default-cni-769000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0809 11:28:04.083875    4095 start.go:125] createHost starting for "" (driver="qemu2")
	I0809 11:28:04.092422    4095 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0809 11:28:04.108917    4095 start.go:159] libmachine.API.Create for "enable-default-cni-769000" (driver="qemu2")
	I0809 11:28:04.108949    4095 client.go:168] LocalClient.Create starting
	I0809 11:28:04.109014    4095 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem
	I0809 11:28:04.109050    4095 main.go:141] libmachine: Decoding PEM data...
	I0809 11:28:04.109060    4095 main.go:141] libmachine: Parsing certificate...
	I0809 11:28:04.109101    4095 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/cert.pem
	I0809 11:28:04.109123    4095 main.go:141] libmachine: Decoding PEM data...
	I0809 11:28:04.109133    4095 main.go:141] libmachine: Parsing certificate...
	I0809 11:28:04.109698    4095 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17011-995/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17011-995/.minikube/cache/iso/arm64/minikube-v1.31.0-1690838458-16971-arm64.iso...
	I0809 11:28:04.225186    4095 main.go:141] libmachine: Creating SSH key...
	I0809 11:28:04.305645    4095 main.go:141] libmachine: Creating Disk image...
	I0809 11:28:04.305651    4095 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0809 11:28:04.305789    4095 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17011-995/.minikube/machines/enable-default-cni-769000/disk.qcow2.raw /Users/jenkins/minikube-integration/17011-995/.minikube/machines/enable-default-cni-769000/disk.qcow2
	I0809 11:28:04.314553    4095 main.go:141] libmachine: STDOUT: 
	I0809 11:28:04.314574    4095 main.go:141] libmachine: STDERR: 
	I0809 11:28:04.314637    4095 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/enable-default-cni-769000/disk.qcow2 +20000M
	I0809 11:28:04.321798    4095 main.go:141] libmachine: STDOUT: Image resized.
	
	I0809 11:28:04.321811    4095 main.go:141] libmachine: STDERR: 
	I0809 11:28:04.321838    4095 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17011-995/.minikube/machines/enable-default-cni-769000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17011-995/.minikube/machines/enable-default-cni-769000/disk.qcow2
	I0809 11:28:04.321847    4095 main.go:141] libmachine: Starting QEMU VM...
	I0809 11:28:04.321892    4095 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17011-995/.minikube/machines/enable-default-cni-769000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/enable-default-cni-769000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17011-995/.minikube/machines/enable-default-cni-769000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:14:fa:fc:c0:30 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/enable-default-cni-769000/disk.qcow2
	I0809 11:28:04.323398    4095 main.go:141] libmachine: STDOUT: 
	I0809 11:28:04.323410    4095 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0809 11:28:04.323430    4095 client.go:171] LocalClient.Create took 214.48ms
	I0809 11:28:06.325571    4095 start.go:128] duration metric: createHost completed in 2.241734958s
	I0809 11:28:06.325626    4095 start.go:83] releasing machines lock for "enable-default-cni-769000", held for 2.241840834s
	W0809 11:28:06.325690    4095 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:28:06.333947    4095 out.go:177] * Deleting "enable-default-cni-769000" in qemu2 ...
	W0809 11:28:06.356971    4095 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:28:06.357007    4095 start.go:687] Will try again in 5 seconds ...
	I0809 11:28:11.357553    4095 start.go:365] acquiring machines lock for enable-default-cni-769000: {Name:mk8e7245a2916cb0ab3d957fdcdeb632388214d9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0809 11:28:11.358074    4095 start.go:369] acquired machines lock for "enable-default-cni-769000" in 404.041µs
	I0809 11:28:11.358203    4095 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-769000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:enable-default-cni-769000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0809 11:28:11.358532    4095 start.go:125] createHost starting for "" (driver="qemu2")
	I0809 11:28:11.368212    4095 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0809 11:28:11.415642    4095 start.go:159] libmachine.API.Create for "enable-default-cni-769000" (driver="qemu2")
	I0809 11:28:11.415687    4095 client.go:168] LocalClient.Create starting
	I0809 11:28:11.415831    4095 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem
	I0809 11:28:11.415893    4095 main.go:141] libmachine: Decoding PEM data...
	I0809 11:28:11.415917    4095 main.go:141] libmachine: Parsing certificate...
	I0809 11:28:11.415999    4095 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/cert.pem
	I0809 11:28:11.416041    4095 main.go:141] libmachine: Decoding PEM data...
	I0809 11:28:11.416055    4095 main.go:141] libmachine: Parsing certificate...
	I0809 11:28:11.416594    4095 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17011-995/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17011-995/.minikube/cache/iso/arm64/minikube-v1.31.0-1690838458-16971-arm64.iso...
	I0809 11:28:11.544644    4095 main.go:141] libmachine: Creating SSH key...
	I0809 11:28:11.656127    4095 main.go:141] libmachine: Creating Disk image...
	I0809 11:28:11.656135    4095 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0809 11:28:11.656267    4095 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17011-995/.minikube/machines/enable-default-cni-769000/disk.qcow2.raw /Users/jenkins/minikube-integration/17011-995/.minikube/machines/enable-default-cni-769000/disk.qcow2
	I0809 11:28:11.664723    4095 main.go:141] libmachine: STDOUT: 
	I0809 11:28:11.664735    4095 main.go:141] libmachine: STDERR: 
	I0809 11:28:11.664794    4095 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/enable-default-cni-769000/disk.qcow2 +20000M
	I0809 11:28:11.672176    4095 main.go:141] libmachine: STDOUT: Image resized.
	
	I0809 11:28:11.672187    4095 main.go:141] libmachine: STDERR: 
	I0809 11:28:11.672200    4095 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17011-995/.minikube/machines/enable-default-cni-769000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17011-995/.minikube/machines/enable-default-cni-769000/disk.qcow2
	I0809 11:28:11.672203    4095 main.go:141] libmachine: Starting QEMU VM...
	I0809 11:28:11.672242    4095 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17011-995/.minikube/machines/enable-default-cni-769000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/enable-default-cni-769000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17011-995/.minikube/machines/enable-default-cni-769000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:6c:69:86:12:93 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/enable-default-cni-769000/disk.qcow2
	I0809 11:28:11.673764    4095 main.go:141] libmachine: STDOUT: 
	I0809 11:28:11.673774    4095 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0809 11:28:11.673786    4095 client.go:171] LocalClient.Create took 258.101ms
	I0809 11:28:13.675988    4095 start.go:128] duration metric: createHost completed in 2.317491s
	I0809 11:28:13.676033    4095 start.go:83] releasing machines lock for "enable-default-cni-769000", held for 2.31799025s
	W0809 11:28:13.676381    4095 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-769000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-769000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:28:13.684899    4095 out.go:177] 
	W0809 11:28:13.689073    4095 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0809 11:28:13.689125    4095 out.go:239] * 
	* 
	W0809 11:28:13.691677    4095 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0809 11:28:13.700973    4095 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.75s)

TestNetworkPlugins/group/flannel/Start (9.85s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-769000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-769000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.843160959s)

-- stdout --
	* [flannel-769000] minikube v1.31.1 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17011
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node flannel-769000 in cluster flannel-769000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-769000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0809 11:28:15.836299    4206 out.go:296] Setting OutFile to fd 1 ...
	I0809 11:28:15.836416    4206 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:28:15.836419    4206 out.go:309] Setting ErrFile to fd 2...
	I0809 11:28:15.836421    4206 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:28:15.836530    4206 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17011-995/.minikube/bin
	I0809 11:28:15.837543    4206 out.go:303] Setting JSON to false
	I0809 11:28:15.852611    4206 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1669,"bootTime":1691604026,"procs":399,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0809 11:28:15.852700    4206 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0809 11:28:15.858241    4206 out.go:177] * [flannel-769000] minikube v1.31.1 on Darwin 13.5 (arm64)
	I0809 11:28:15.865210    4206 out.go:177]   - MINIKUBE_LOCATION=17011
	I0809 11:28:15.865278    4206 notify.go:220] Checking for updates...
	I0809 11:28:15.869231    4206 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	I0809 11:28:15.870616    4206 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0809 11:28:15.873226    4206 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0809 11:28:15.876233    4206 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	I0809 11:28:15.879262    4206 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0809 11:28:15.882552    4206 config.go:182] Loaded profile config "multinode-305000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0809 11:28:15.882593    4206 driver.go:373] Setting default libvirt URI to qemu:///system
	I0809 11:28:15.887214    4206 out.go:177] * Using the qemu2 driver based on user configuration
	I0809 11:28:15.894189    4206 start.go:298] selected driver: qemu2
	I0809 11:28:15.894194    4206 start.go:901] validating driver "qemu2" against <nil>
	I0809 11:28:15.894200    4206 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0809 11:28:15.896084    4206 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0809 11:28:15.899341    4206 out.go:177] * Automatically selected the socket_vmnet network
	I0809 11:28:15.902281    4206 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0809 11:28:15.902299    4206 cni.go:84] Creating CNI manager for "flannel"
	I0809 11:28:15.902302    4206 start_flags.go:314] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0809 11:28:15.902307    4206 start_flags.go:319] config:
	{Name:flannel-769000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:flannel-769000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0809 11:28:15.906053    4206 iso.go:125] acquiring lock: {Name:mkcf9da2bbc06f7ffafe691590a499fa3fc28d1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 11:28:15.910155    4206 out.go:177] * Starting control plane node flannel-769000 in cluster flannel-769000
	I0809 11:28:15.917141    4206 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0809 11:28:15.917164    4206 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4
	I0809 11:28:15.917179    4206 cache.go:57] Caching tarball of preloaded images
	I0809 11:28:15.917257    4206 preload.go:174] Found /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0809 11:28:15.917262    4206 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on docker
	I0809 11:28:15.917318    4206 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/flannel-769000/config.json ...
	I0809 11:28:15.917333    4206 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/flannel-769000/config.json: {Name:mka63fce7ef00f98b4b7f32449ea328ac6d53fd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 11:28:15.917530    4206 start.go:365] acquiring machines lock for flannel-769000: {Name:mk8e7245a2916cb0ab3d957fdcdeb632388214d9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0809 11:28:15.917558    4206 start.go:369] acquired machines lock for "flannel-769000" in 22.041µs
	I0809 11:28:15.917567    4206 start.go:93] Provisioning new machine with config: &{Name:flannel-769000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:flannel-769000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0809 11:28:15.917602    4206 start.go:125] createHost starting for "" (driver="qemu2")
	I0809 11:28:15.922272    4206 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0809 11:28:15.937133    4206 start.go:159] libmachine.API.Create for "flannel-769000" (driver="qemu2")
	I0809 11:28:15.937157    4206 client.go:168] LocalClient.Create starting
	I0809 11:28:15.937212    4206 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem
	I0809 11:28:15.937235    4206 main.go:141] libmachine: Decoding PEM data...
	I0809 11:28:15.937245    4206 main.go:141] libmachine: Parsing certificate...
	I0809 11:28:15.937281    4206 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/cert.pem
	I0809 11:28:15.937298    4206 main.go:141] libmachine: Decoding PEM data...
	I0809 11:28:15.937304    4206 main.go:141] libmachine: Parsing certificate...
	I0809 11:28:15.937589    4206 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17011-995/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17011-995/.minikube/cache/iso/arm64/minikube-v1.31.0-1690838458-16971-arm64.iso...
	I0809 11:28:16.051686    4206 main.go:141] libmachine: Creating SSH key...
	I0809 11:28:16.208762    4206 main.go:141] libmachine: Creating Disk image...
	I0809 11:28:16.208768    4206 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0809 11:28:16.208913    4206 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17011-995/.minikube/machines/flannel-769000/disk.qcow2.raw /Users/jenkins/minikube-integration/17011-995/.minikube/machines/flannel-769000/disk.qcow2
	I0809 11:28:16.217733    4206 main.go:141] libmachine: STDOUT: 
	I0809 11:28:16.217750    4206 main.go:141] libmachine: STDERR: 
	I0809 11:28:16.217814    4206 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/flannel-769000/disk.qcow2 +20000M
	I0809 11:28:16.224968    4206 main.go:141] libmachine: STDOUT: Image resized.
	
	I0809 11:28:16.224979    4206 main.go:141] libmachine: STDERR: 
	I0809 11:28:16.224995    4206 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17011-995/.minikube/machines/flannel-769000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17011-995/.minikube/machines/flannel-769000/disk.qcow2
	I0809 11:28:16.225003    4206 main.go:141] libmachine: Starting QEMU VM...
	I0809 11:28:16.225041    4206 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17011-995/.minikube/machines/flannel-769000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/flannel-769000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17011-995/.minikube/machines/flannel-769000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:a6:0c:f6:92:3d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/flannel-769000/disk.qcow2
	I0809 11:28:16.226458    4206 main.go:141] libmachine: STDOUT: 
	I0809 11:28:16.226469    4206 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0809 11:28:16.226488    4206 client.go:171] LocalClient.Create took 289.33275ms
	I0809 11:28:18.228641    4206 start.go:128] duration metric: createHost completed in 2.311079208s
	I0809 11:28:18.228694    4206 start.go:83] releasing machines lock for "flannel-769000", held for 2.31118475s
	W0809 11:28:18.228761    4206 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:28:18.238502    4206 out.go:177] * Deleting "flannel-769000" in qemu2 ...
	W0809 11:28:18.258907    4206 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:28:18.258938    4206 start.go:687] Will try again in 5 seconds ...
	I0809 11:28:23.260976    4206 start.go:365] acquiring machines lock for flannel-769000: {Name:mk8e7245a2916cb0ab3d957fdcdeb632388214d9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0809 11:28:23.261547    4206 start.go:369] acquired machines lock for "flannel-769000" in 428.333µs
	I0809 11:28:23.261685    4206 start.go:93] Provisioning new machine with config: &{Name:flannel-769000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:flannel-769000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0809 11:28:23.262008    4206 start.go:125] createHost starting for "" (driver="qemu2")
	I0809 11:28:23.271495    4206 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0809 11:28:23.319900    4206 start.go:159] libmachine.API.Create for "flannel-769000" (driver="qemu2")
	I0809 11:28:23.319970    4206 client.go:168] LocalClient.Create starting
	I0809 11:28:23.320104    4206 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem
	I0809 11:28:23.320153    4206 main.go:141] libmachine: Decoding PEM data...
	I0809 11:28:23.320172    4206 main.go:141] libmachine: Parsing certificate...
	I0809 11:28:23.320243    4206 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/cert.pem
	I0809 11:28:23.320278    4206 main.go:141] libmachine: Decoding PEM data...
	I0809 11:28:23.320293    4206 main.go:141] libmachine: Parsing certificate...
	I0809 11:28:23.320770    4206 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17011-995/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17011-995/.minikube/cache/iso/arm64/minikube-v1.31.0-1690838458-16971-arm64.iso...
	I0809 11:28:23.446252    4206 main.go:141] libmachine: Creating SSH key...
	I0809 11:28:23.593532    4206 main.go:141] libmachine: Creating Disk image...
	I0809 11:28:23.593542    4206 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0809 11:28:23.593690    4206 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17011-995/.minikube/machines/flannel-769000/disk.qcow2.raw /Users/jenkins/minikube-integration/17011-995/.minikube/machines/flannel-769000/disk.qcow2
	I0809 11:28:23.602204    4206 main.go:141] libmachine: STDOUT: 
	I0809 11:28:23.602218    4206 main.go:141] libmachine: STDERR: 
	I0809 11:28:23.602267    4206 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/flannel-769000/disk.qcow2 +20000M
	I0809 11:28:23.609275    4206 main.go:141] libmachine: STDOUT: Image resized.
	
	I0809 11:28:23.609287    4206 main.go:141] libmachine: STDERR: 
	I0809 11:28:23.609297    4206 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17011-995/.minikube/machines/flannel-769000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17011-995/.minikube/machines/flannel-769000/disk.qcow2
	I0809 11:28:23.609303    4206 main.go:141] libmachine: Starting QEMU VM...
	I0809 11:28:23.609346    4206 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17011-995/.minikube/machines/flannel-769000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/flannel-769000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17011-995/.minikube/machines/flannel-769000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:e1:6c:f1:af:ee -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/flannel-769000/disk.qcow2
	I0809 11:28:23.610810    4206 main.go:141] libmachine: STDOUT: 
	I0809 11:28:23.610823    4206 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0809 11:28:23.610836    4206 client.go:171] LocalClient.Create took 290.866917ms
	I0809 11:28:25.612942    4206 start.go:128] duration metric: createHost completed in 2.350963708s
	I0809 11:28:25.613000    4206 start.go:83] releasing machines lock for "flannel-769000", held for 2.351484583s
	W0809 11:28:25.613333    4206 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-769000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-769000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:28:25.622920    4206 out.go:177] 
	W0809 11:28:25.627034    4206 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0809 11:28:25.627058    4206 out.go:239] * 
	* 
	W0809 11:28:25.629887    4206 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0809 11:28:25.638783    4206 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.85s)

TestNetworkPlugins/group/bridge/Start (9.99s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-769000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-769000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.986743083s)

-- stdout --
	* [bridge-769000] minikube v1.31.1 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17011
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node bridge-769000 in cluster bridge-769000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-769000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0809 11:28:27.963654    4324 out.go:296] Setting OutFile to fd 1 ...
	I0809 11:28:27.963742    4324 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:28:27.963744    4324 out.go:309] Setting ErrFile to fd 2...
	I0809 11:28:27.963747    4324 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:28:27.963868    4324 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17011-995/.minikube/bin
	I0809 11:28:27.964895    4324 out.go:303] Setting JSON to false
	I0809 11:28:27.980104    4324 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1681,"bootTime":1691604026,"procs":399,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0809 11:28:27.980175    4324 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0809 11:28:27.985142    4324 out.go:177] * [bridge-769000] minikube v1.31.1 on Darwin 13.5 (arm64)
	I0809 11:28:27.993207    4324 out.go:177]   - MINIKUBE_LOCATION=17011
	I0809 11:28:27.997136    4324 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	I0809 11:28:27.993226    4324 notify.go:220] Checking for updates...
	I0809 11:28:28.003157    4324 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0809 11:28:28.006086    4324 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0809 11:28:28.009152    4324 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	I0809 11:28:28.012236    4324 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0809 11:28:28.015494    4324 config.go:182] Loaded profile config "multinode-305000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0809 11:28:28.015544    4324 driver.go:373] Setting default libvirt URI to qemu:///system
	I0809 11:28:28.020148    4324 out.go:177] * Using the qemu2 driver based on user configuration
	I0809 11:28:28.027061    4324 start.go:298] selected driver: qemu2
	I0809 11:28:28.027067    4324 start.go:901] validating driver "qemu2" against <nil>
	I0809 11:28:28.027073    4324 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0809 11:28:28.029045    4324 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0809 11:28:28.033147    4324 out.go:177] * Automatically selected the socket_vmnet network
	I0809 11:28:28.036270    4324 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0809 11:28:28.036296    4324 cni.go:84] Creating CNI manager for "bridge"
	I0809 11:28:28.036301    4324 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0809 11:28:28.036306    4324 start_flags.go:319] config:
	{Name:bridge-769000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:bridge-769000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHA
gentPID:0}
	I0809 11:28:28.040564    4324 iso.go:125] acquiring lock: {Name:mkcf9da2bbc06f7ffafe691590a499fa3fc28d1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 11:28:28.047194    4324 out.go:177] * Starting control plane node bridge-769000 in cluster bridge-769000
	I0809 11:28:28.051023    4324 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0809 11:28:28.051042    4324 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4
	I0809 11:28:28.051057    4324 cache.go:57] Caching tarball of preloaded images
	I0809 11:28:28.051124    4324 preload.go:174] Found /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0809 11:28:28.051130    4324 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on docker
	I0809 11:28:28.051193    4324 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/bridge-769000/config.json ...
	I0809 11:28:28.051212    4324 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/bridge-769000/config.json: {Name:mk630af051ed4c2fd35c4f3473759f97cc2d01f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 11:28:28.051415    4324 start.go:365] acquiring machines lock for bridge-769000: {Name:mk8e7245a2916cb0ab3d957fdcdeb632388214d9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0809 11:28:28.051444    4324 start.go:369] acquired machines lock for "bridge-769000" in 23.667µs
	I0809 11:28:28.051453    4324 start.go:93] Provisioning new machine with config: &{Name:bridge-769000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.27.4 ClusterName:bridge-769000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0809 11:28:28.051493    4324 start.go:125] createHost starting for "" (driver="qemu2")
	I0809 11:28:28.060013    4324 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0809 11:28:28.075747    4324 start.go:159] libmachine.API.Create for "bridge-769000" (driver="qemu2")
	I0809 11:28:28.075767    4324 client.go:168] LocalClient.Create starting
	I0809 11:28:28.075817    4324 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem
	I0809 11:28:28.075848    4324 main.go:141] libmachine: Decoding PEM data...
	I0809 11:28:28.075863    4324 main.go:141] libmachine: Parsing certificate...
	I0809 11:28:28.075906    4324 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/cert.pem
	I0809 11:28:28.075924    4324 main.go:141] libmachine: Decoding PEM data...
	I0809 11:28:28.075930    4324 main.go:141] libmachine: Parsing certificate...
	I0809 11:28:28.076246    4324 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17011-995/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17011-995/.minikube/cache/iso/arm64/minikube-v1.31.0-1690838458-16971-arm64.iso...
	I0809 11:28:28.190941    4324 main.go:141] libmachine: Creating SSH key...
	I0809 11:28:28.394145    4324 main.go:141] libmachine: Creating Disk image...
	I0809 11:28:28.394154    4324 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0809 11:28:28.394341    4324 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17011-995/.minikube/machines/bridge-769000/disk.qcow2.raw /Users/jenkins/minikube-integration/17011-995/.minikube/machines/bridge-769000/disk.qcow2
	I0809 11:28:28.403312    4324 main.go:141] libmachine: STDOUT: 
	I0809 11:28:28.403329    4324 main.go:141] libmachine: STDERR: 
	I0809 11:28:28.403410    4324 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/bridge-769000/disk.qcow2 +20000M
	I0809 11:28:28.410636    4324 main.go:141] libmachine: STDOUT: Image resized.
	
	I0809 11:28:28.410650    4324 main.go:141] libmachine: STDERR: 
	I0809 11:28:28.410663    4324 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17011-995/.minikube/machines/bridge-769000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17011-995/.minikube/machines/bridge-769000/disk.qcow2
	I0809 11:28:28.410670    4324 main.go:141] libmachine: Starting QEMU VM...
	I0809 11:28:28.410711    4324 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17011-995/.minikube/machines/bridge-769000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/bridge-769000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17011-995/.minikube/machines/bridge-769000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:a5:91:7c:36:d9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/bridge-769000/disk.qcow2
	I0809 11:28:28.412236    4324 main.go:141] libmachine: STDOUT: 
	I0809 11:28:28.412251    4324 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0809 11:28:28.412269    4324 client.go:171] LocalClient.Create took 336.506625ms
	I0809 11:28:30.414426    4324 start.go:128] duration metric: createHost completed in 2.362961625s
	I0809 11:28:30.414513    4324 start.go:83] releasing machines lock for "bridge-769000", held for 2.363117833s
	W0809 11:28:30.414574    4324 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:28:30.422046    4324 out.go:177] * Deleting "bridge-769000" in qemu2 ...
	W0809 11:28:30.447574    4324 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:28:30.447601    4324 start.go:687] Will try again in 5 seconds ...
	I0809 11:28:35.449730    4324 start.go:365] acquiring machines lock for bridge-769000: {Name:mk8e7245a2916cb0ab3d957fdcdeb632388214d9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0809 11:28:35.450243    4324 start.go:369] acquired machines lock for "bridge-769000" in 381.792µs
	I0809 11:28:35.450354    4324 start.go:93] Provisioning new machine with config: &{Name:bridge-769000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.27.4 ClusterName:bridge-769000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0809 11:28:35.450660    4324 start.go:125] createHost starting for "" (driver="qemu2")
	I0809 11:28:35.456583    4324 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0809 11:28:35.501661    4324 start.go:159] libmachine.API.Create for "bridge-769000" (driver="qemu2")
	I0809 11:28:35.501711    4324 client.go:168] LocalClient.Create starting
	I0809 11:28:35.501826    4324 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem
	I0809 11:28:35.501880    4324 main.go:141] libmachine: Decoding PEM data...
	I0809 11:28:35.501899    4324 main.go:141] libmachine: Parsing certificate...
	I0809 11:28:35.501988    4324 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/cert.pem
	I0809 11:28:35.502022    4324 main.go:141] libmachine: Decoding PEM data...
	I0809 11:28:35.502035    4324 main.go:141] libmachine: Parsing certificate...
	I0809 11:28:35.502522    4324 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17011-995/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17011-995/.minikube/cache/iso/arm64/minikube-v1.31.0-1690838458-16971-arm64.iso...
	I0809 11:28:35.626822    4324 main.go:141] libmachine: Creating SSH key...
	I0809 11:28:35.861241    4324 main.go:141] libmachine: Creating Disk image...
	I0809 11:28:35.861251    4324 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0809 11:28:35.861432    4324 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17011-995/.minikube/machines/bridge-769000/disk.qcow2.raw /Users/jenkins/minikube-integration/17011-995/.minikube/machines/bridge-769000/disk.qcow2
	I0809 11:28:35.870673    4324 main.go:141] libmachine: STDOUT: 
	I0809 11:28:35.870688    4324 main.go:141] libmachine: STDERR: 
	I0809 11:28:35.870737    4324 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/bridge-769000/disk.qcow2 +20000M
	I0809 11:28:35.878123    4324 main.go:141] libmachine: STDOUT: Image resized.
	
	I0809 11:28:35.878138    4324 main.go:141] libmachine: STDERR: 
	I0809 11:28:35.878149    4324 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17011-995/.minikube/machines/bridge-769000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17011-995/.minikube/machines/bridge-769000/disk.qcow2
	I0809 11:28:35.878155    4324 main.go:141] libmachine: Starting QEMU VM...
	I0809 11:28:35.878199    4324 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17011-995/.minikube/machines/bridge-769000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/bridge-769000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17011-995/.minikube/machines/bridge-769000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:25:8a:47:36:15 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/bridge-769000/disk.qcow2
	I0809 11:28:35.879757    4324 main.go:141] libmachine: STDOUT: 
	I0809 11:28:35.879773    4324 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0809 11:28:35.879784    4324 client.go:171] LocalClient.Create took 378.074375ms
	I0809 11:28:37.882050    4324 start.go:128] duration metric: createHost completed in 2.431356125s
	I0809 11:28:37.882118    4324 start.go:83] releasing machines lock for "bridge-769000", held for 2.431910333s
	W0809 11:28:37.882535    4324 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-769000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-769000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:28:37.894077    4324 out.go:177] 
	W0809 11:28:37.899218    4324 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0809 11:28:37.899334    4324 out.go:239] * 
	* 
	W0809 11:28:37.902095    4324 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0809 11:28:37.911036    4324 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.99s)
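Every failure in this group exits with status 80 for the same root cause: nothing is listening on /var/run/socket_vmnet when the qemu2 driver invokes /opt/socket_vmnet/bin/socket_vmnet_client. A minimal pre-flight check is sketched below; the paths match the log above, but the Homebrew service name `socket_vmnet` is an assumption about how the daemon was installed on this agent.

```shell
#!/bin/sh
# Sketch: verify the vmnet unix socket exists before starting minikube
# with --driver=qemu2 --network=socket_vmnet. Always exits 0; it only
# reports what it finds.
check_socket_vmnet() {
    sock="${1:-/var/run/socket_vmnet}"
    if [ -S "$sock" ]; then
        echo "socket present: $sock"
    else
        echo "socket missing: $sock (daemon not running?)"
        # Assumes a Homebrew install of socket_vmnet; adjust if installed
        # some other way.
        echo "if installed via Homebrew, try: sudo brew services restart socket_vmnet"
    fi
}

check_socket_vmnet "$@"
```

The log shows minikube already retries once (deleting the profile and re-creating the VM) and hits the same "Connection refused" both times, so restarting the daemon before the test run, rather than retrying the start, is the likely fix.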

TestNetworkPlugins/group/kubenet/Start (9.87s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-769000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
E0809 11:28:42.231068    1410 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/functional-901000/client.crt: no such file or directory
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-769000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.868276334s)

-- stdout --
	* [kubenet-769000] minikube v1.31.1 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17011
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kubenet-769000 in cluster kubenet-769000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-769000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0809 11:28:40.055635    4445 out.go:296] Setting OutFile to fd 1 ...
	I0809 11:28:40.055745    4445 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:28:40.055748    4445 out.go:309] Setting ErrFile to fd 2...
	I0809 11:28:40.055751    4445 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:28:40.055862    4445 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17011-995/.minikube/bin
	I0809 11:28:40.056860    4445 out.go:303] Setting JSON to false
	I0809 11:28:40.071808    4445 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1694,"bootTime":1691604026,"procs":408,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0809 11:28:40.071865    4445 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0809 11:28:40.077611    4445 out.go:177] * [kubenet-769000] minikube v1.31.1 on Darwin 13.5 (arm64)
	I0809 11:28:40.085625    4445 out.go:177]   - MINIKUBE_LOCATION=17011
	I0809 11:28:40.085644    4445 notify.go:220] Checking for updates...
	I0809 11:28:40.089580    4445 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	I0809 11:28:40.093542    4445 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0809 11:28:40.096559    4445 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0809 11:28:40.099557    4445 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	I0809 11:28:40.102564    4445 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0809 11:28:40.105914    4445 config.go:182] Loaded profile config "multinode-305000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0809 11:28:40.105981    4445 driver.go:373] Setting default libvirt URI to qemu:///system
	I0809 11:28:40.109593    4445 out.go:177] * Using the qemu2 driver based on user configuration
	I0809 11:28:40.116534    4445 start.go:298] selected driver: qemu2
	I0809 11:28:40.116539    4445 start.go:901] validating driver "qemu2" against <nil>
	I0809 11:28:40.116544    4445 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0809 11:28:40.118489    4445 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0809 11:28:40.121616    4445 out.go:177] * Automatically selected the socket_vmnet network
	I0809 11:28:40.125634    4445 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0809 11:28:40.125660    4445 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0809 11:28:40.125664    4445 start_flags.go:319] config:
	{Name:kubenet-769000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:kubenet-769000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0809 11:28:40.129566    4445 iso.go:125] acquiring lock: {Name:mkcf9da2bbc06f7ffafe691590a499fa3fc28d1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 11:28:40.137488    4445 out.go:177] * Starting control plane node kubenet-769000 in cluster kubenet-769000
	I0809 11:28:40.141528    4445 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0809 11:28:40.141543    4445 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4
	I0809 11:28:40.141552    4445 cache.go:57] Caching tarball of preloaded images
	I0809 11:28:40.141604    4445 preload.go:174] Found /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0809 11:28:40.141868    4445 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on docker
	I0809 11:28:40.142056    4445 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/kubenet-769000/config.json ...
	I0809 11:28:40.142107    4445 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/kubenet-769000/config.json: {Name:mkc354241677b71ae6d15d3fe495c2ff8309dfb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 11:28:40.142616    4445 start.go:365] acquiring machines lock for kubenet-769000: {Name:mk8e7245a2916cb0ab3d957fdcdeb632388214d9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0809 11:28:40.142687    4445 start.go:369] acquired machines lock for "kubenet-769000" in 56.875µs
	I0809 11:28:40.142707    4445 start.go:93] Provisioning new machine with config: &{Name:kubenet-769000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:kubenet-769000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0809 11:28:40.142748    4445 start.go:125] createHost starting for "" (driver="qemu2")
	I0809 11:28:40.150575    4445 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0809 11:28:40.166455    4445 start.go:159] libmachine.API.Create for "kubenet-769000" (driver="qemu2")
	I0809 11:28:40.166478    4445 client.go:168] LocalClient.Create starting
	I0809 11:28:40.166538    4445 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem
	I0809 11:28:40.166563    4445 main.go:141] libmachine: Decoding PEM data...
	I0809 11:28:40.166581    4445 main.go:141] libmachine: Parsing certificate...
	I0809 11:28:40.166624    4445 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/cert.pem
	I0809 11:28:40.166641    4445 main.go:141] libmachine: Decoding PEM data...
	I0809 11:28:40.166649    4445 main.go:141] libmachine: Parsing certificate...
	I0809 11:28:40.166988    4445 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17011-995/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17011-995/.minikube/cache/iso/arm64/minikube-v1.31.0-1690838458-16971-arm64.iso...
	I0809 11:28:40.280077    4445 main.go:141] libmachine: Creating SSH key...
	I0809 11:28:40.515336    4445 main.go:141] libmachine: Creating Disk image...
	I0809 11:28:40.515348    4445 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0809 11:28:40.515515    4445 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17011-995/.minikube/machines/kubenet-769000/disk.qcow2.raw /Users/jenkins/minikube-integration/17011-995/.minikube/machines/kubenet-769000/disk.qcow2
	I0809 11:28:40.524806    4445 main.go:141] libmachine: STDOUT: 
	I0809 11:28:40.524828    4445 main.go:141] libmachine: STDERR: 
	I0809 11:28:40.524909    4445 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/kubenet-769000/disk.qcow2 +20000M
	I0809 11:28:40.532255    4445 main.go:141] libmachine: STDOUT: Image resized.
	
	I0809 11:28:40.532267    4445 main.go:141] libmachine: STDERR: 
	I0809 11:28:40.532290    4445 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17011-995/.minikube/machines/kubenet-769000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17011-995/.minikube/machines/kubenet-769000/disk.qcow2
	I0809 11:28:40.532301    4445 main.go:141] libmachine: Starting QEMU VM...
	I0809 11:28:40.532340    4445 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17011-995/.minikube/machines/kubenet-769000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/kubenet-769000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17011-995/.minikube/machines/kubenet-769000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:2c:7f:4e:c8:a6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/kubenet-769000/disk.qcow2
	I0809 11:28:40.533769    4445 main.go:141] libmachine: STDOUT: 
	I0809 11:28:40.533781    4445 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0809 11:28:40.533801    4445 client.go:171] LocalClient.Create took 367.327084ms
	I0809 11:28:42.536006    4445 start.go:128] duration metric: createHost completed in 2.393286584s
	I0809 11:28:42.536079    4445 start.go:83] releasing machines lock for "kubenet-769000", held for 2.393436167s
	W0809 11:28:42.536143    4445 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:28:42.545512    4445 out.go:177] * Deleting "kubenet-769000" in qemu2 ...
	W0809 11:28:42.567844    4445 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:28:42.567868    4445 start.go:687] Will try again in 5 seconds ...
	I0809 11:28:47.570094    4445 start.go:365] acquiring machines lock for kubenet-769000: {Name:mk8e7245a2916cb0ab3d957fdcdeb632388214d9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0809 11:28:47.570538    4445 start.go:369] acquired machines lock for "kubenet-769000" in 338.167µs
	I0809 11:28:47.570658    4445 start.go:93] Provisioning new machine with config: &{Name:kubenet-769000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:kubenet-769000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0809 11:28:47.570971    4445 start.go:125] createHost starting for "" (driver="qemu2")
	I0809 11:28:47.577658    4445 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0809 11:28:47.624881    4445 start.go:159] libmachine.API.Create for "kubenet-769000" (driver="qemu2")
	I0809 11:28:47.624935    4445 client.go:168] LocalClient.Create starting
	I0809 11:28:47.625035    4445 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem
	I0809 11:28:47.625089    4445 main.go:141] libmachine: Decoding PEM data...
	I0809 11:28:47.625110    4445 main.go:141] libmachine: Parsing certificate...
	I0809 11:28:47.625183    4445 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/cert.pem
	I0809 11:28:47.625217    4445 main.go:141] libmachine: Decoding PEM data...
	I0809 11:28:47.625233    4445 main.go:141] libmachine: Parsing certificate...
	I0809 11:28:47.625698    4445 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17011-995/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17011-995/.minikube/cache/iso/arm64/minikube-v1.31.0-1690838458-16971-arm64.iso...
	I0809 11:28:47.752513    4445 main.go:141] libmachine: Creating SSH key...
	I0809 11:28:47.837692    4445 main.go:141] libmachine: Creating Disk image...
	I0809 11:28:47.837700    4445 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0809 11:28:47.837826    4445 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17011-995/.minikube/machines/kubenet-769000/disk.qcow2.raw /Users/jenkins/minikube-integration/17011-995/.minikube/machines/kubenet-769000/disk.qcow2
	I0809 11:28:47.846206    4445 main.go:141] libmachine: STDOUT: 
	I0809 11:28:47.846223    4445 main.go:141] libmachine: STDERR: 
	I0809 11:28:47.846286    4445 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/kubenet-769000/disk.qcow2 +20000M
	I0809 11:28:47.853539    4445 main.go:141] libmachine: STDOUT: Image resized.
	
	I0809 11:28:47.853551    4445 main.go:141] libmachine: STDERR: 
	I0809 11:28:47.853565    4445 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17011-995/.minikube/machines/kubenet-769000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17011-995/.minikube/machines/kubenet-769000/disk.qcow2
	I0809 11:28:47.853574    4445 main.go:141] libmachine: Starting QEMU VM...
	I0809 11:28:47.853618    4445 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17011-995/.minikube/machines/kubenet-769000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/kubenet-769000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17011-995/.minikube/machines/kubenet-769000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:97:96:9e:66:34 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/kubenet-769000/disk.qcow2
	I0809 11:28:47.855052    4445 main.go:141] libmachine: STDOUT: 
	I0809 11:28:47.855064    4445 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0809 11:28:47.855078    4445 client.go:171] LocalClient.Create took 230.142458ms
	I0809 11:28:49.857198    4445 start.go:128] duration metric: createHost completed in 2.286249125s
	I0809 11:28:49.857281    4445 start.go:83] releasing machines lock for "kubenet-769000", held for 2.286775667s
	W0809 11:28:49.857705    4445 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-769000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-769000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:28:49.870329    4445 out.go:177] 
	W0809 11:28:49.873309    4445 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0809 11:28:49.873449    4445 out.go:239] * 
	* 
	W0809 11:28:49.876291    4445 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0809 11:28:49.885250    4445 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.87s)
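The repeated `Failed to connect to "/var/run/socket_vmnet": Connection refused` in the log above means no daemon was accepting connections on that unix socket when the QEMU VM was launched. A minimal Python sketch of this failure class (the path below is a throwaway temp file for illustration, not the CI host's `/var/run/socket_vmnet`):

```python
import os
import socket
import tempfile

# A unix-socket path that exists on disk but has no process listening
# behind it produces ConnectionRefusedError, the same failure mode as
# the socket_vmnet lines in the log. Illustrative temp path only.
path = os.path.join(tempfile.mkdtemp(), "socket_vmnet")

# bind() creates the socket file; closing without ever calling listen()
# leaves the file behind with nothing accepting connections.
srv = socket.socket(socket.AF_UNIX)
srv.bind(path)
srv.close()

client = socket.socket(socket.AF_UNIX)
try:
    client.connect(path)
except ConnectionRefusedError as e:
    print(f"connection refused: {e}")
finally:
    client.close()
```

This matches the symptom of a socket_vmnet service that is installed (the socket file may exist) but not running.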

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (2.83s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:195: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.41147432.exe start -p stopped-upgrade-181000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:195: (dbg) Non-zero exit: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.41147432.exe start -p stopped-upgrade-181000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.41147432.exe: permission denied (8.6035ms)
version_upgrade_test.go:195: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.41147432.exe start -p stopped-upgrade-181000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:195: (dbg) Non-zero exit: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.41147432.exe start -p stopped-upgrade-181000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.41147432.exe: permission denied (7.768834ms)
version_upgrade_test.go:195: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.41147432.exe start -p stopped-upgrade-181000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:195: (dbg) Non-zero exit: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.41147432.exe start -p stopped-upgrade-181000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.41147432.exe: permission denied (7.260542ms)
version_upgrade_test.go:201: legacy v1.6.2 start failed: fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.41147432.exe: permission denied
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (2.83s)
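The `fork/exec ...: permission denied` errors above are the classic symptom of an extracted binary missing its execute bit: exec fails with EACCES before the program runs at all. A sketch using a hypothetical throwaway script (not the actual `minikube-v1.6.2` binary from the log):

```python
import os
import stat
import subprocess
import tempfile

# Write a trivial script without the execute bit set (plain open() with
# a typical umask yields mode 0644), mirroring a downloaded binary that
# was never marked executable. Temp path for illustration only.
path = os.path.join(tempfile.mkdtemp(), "tool")
with open(path, "w") as f:
    f.write("#!/bin/sh\necho ok\n")

try:
    subprocess.run([path], check=True)  # raises PermissionError (EACCES)
except PermissionError:
    # Equivalent of `chmod u+x`: add the owner execute bit, then retry.
    os.chmod(path, os.stat(path).st_mode | stat.S_IXUSR)
    result = subprocess.run([path], capture_output=True, text=True)
    print(result.stdout.strip())  # -> ok
```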

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.12s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-181000
version_upgrade_test.go:218: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p stopped-upgrade-181000: exit status 85 (114.697291ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile     |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-769000 sudo                               | flannel-769000 | jenkins | v1.31.1 | 09 Aug 23 11:28 PDT |                     |
	|         | systemctl status kubelet --all                       |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p flannel-769000 sudo                               | flannel-769000 | jenkins | v1.31.1 | 09 Aug 23 11:28 PDT |                     |
	|         | systemctl cat kubelet                                |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p flannel-769000 sudo                               | flannel-769000 | jenkins | v1.31.1 | 09 Aug 23 11:28 PDT |                     |
	|         | journalctl -xeu kubelet --all                        |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p flannel-769000 sudo cat                           | flannel-769000 | jenkins | v1.31.1 | 09 Aug 23 11:28 PDT |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                |         |         |                     |                     |
	| ssh     | -p flannel-769000 sudo cat                           | flannel-769000 | jenkins | v1.31.1 | 09 Aug 23 11:28 PDT |                     |
	|         | /var/lib/kubelet/config.yaml                         |                |         |         |                     |                     |
	| ssh     | -p flannel-769000 sudo                               | flannel-769000 | jenkins | v1.31.1 | 09 Aug 23 11:28 PDT |                     |
	|         | systemctl status docker --all                        |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p flannel-769000 sudo                               | flannel-769000 | jenkins | v1.31.1 | 09 Aug 23 11:28 PDT |                     |
	|         | systemctl cat docker                                 |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p flannel-769000 sudo cat                           | flannel-769000 | jenkins | v1.31.1 | 09 Aug 23 11:28 PDT |                     |
	|         | /etc/docker/daemon.json                              |                |         |         |                     |                     |
	| ssh     | -p flannel-769000 sudo docker                        | flannel-769000 | jenkins | v1.31.1 | 09 Aug 23 11:28 PDT |                     |
	|         | system info                                          |                |         |         |                     |                     |
	| ssh     | -p flannel-769000 sudo                               | flannel-769000 | jenkins | v1.31.1 | 09 Aug 23 11:28 PDT |                     |
	|         | systemctl status cri-docker                          |                |         |         |                     |                     |
	|         | --all --full --no-pager                              |                |         |         |                     |                     |
	| ssh     | -p flannel-769000 sudo                               | flannel-769000 | jenkins | v1.31.1 | 09 Aug 23 11:28 PDT |                     |
	|         | systemctl cat cri-docker                             |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p flannel-769000 sudo cat                           | flannel-769000 | jenkins | v1.31.1 | 09 Aug 23 11:28 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                |         |         |                     |                     |
	| ssh     | -p flannel-769000 sudo cat                           | flannel-769000 | jenkins | v1.31.1 | 09 Aug 23 11:28 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                |         |         |                     |                     |
	| ssh     | -p flannel-769000 sudo                               | flannel-769000 | jenkins | v1.31.1 | 09 Aug 23 11:28 PDT |                     |
	|         | cri-dockerd --version                                |                |         |         |                     |                     |
	| ssh     | -p flannel-769000 sudo                               | flannel-769000 | jenkins | v1.31.1 | 09 Aug 23 11:28 PDT |                     |
	|         | systemctl status containerd                          |                |         |         |                     |                     |
	|         | --all --full --no-pager                              |                |         |         |                     |                     |
	| ssh     | -p flannel-769000 sudo                               | flannel-769000 | jenkins | v1.31.1 | 09 Aug 23 11:28 PDT |                     |
	|         | systemctl cat containerd                             |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p flannel-769000 sudo cat                           | flannel-769000 | jenkins | v1.31.1 | 09 Aug 23 11:28 PDT |                     |
	|         | /lib/systemd/system/containerd.service               |                |         |         |                     |                     |
	| ssh     | -p flannel-769000 sudo cat                           | flannel-769000 | jenkins | v1.31.1 | 09 Aug 23 11:28 PDT |                     |
	|         | /etc/containerd/config.toml                          |                |         |         |                     |                     |
	| ssh     | -p flannel-769000 sudo                               | flannel-769000 | jenkins | v1.31.1 | 09 Aug 23 11:28 PDT |                     |
	|         | containerd config dump                               |                |         |         |                     |                     |
	| ssh     | -p flannel-769000 sudo                               | flannel-769000 | jenkins | v1.31.1 | 09 Aug 23 11:28 PDT |                     |
	|         | systemctl status crio --all                          |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p flannel-769000 sudo                               | flannel-769000 | jenkins | v1.31.1 | 09 Aug 23 11:28 PDT |                     |
	|         | systemctl cat crio --no-pager                        |                |         |         |                     |                     |
	| ssh     | -p flannel-769000 sudo find                          | flannel-769000 | jenkins | v1.31.1 | 09 Aug 23 11:28 PDT |                     |
	|         | /etc/crio -type f -exec sh -c                        |                |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                |         |         |                     |                     |
	| ssh     | -p flannel-769000 sudo crio                          | flannel-769000 | jenkins | v1.31.1 | 09 Aug 23 11:28 PDT |                     |
	|         | config                                               |                |         |         |                     |                     |
	| delete  | -p flannel-769000                                    | flannel-769000 | jenkins | v1.31.1 | 09 Aug 23 11:28 PDT | 09 Aug 23 11:28 PDT |
	| start   | -p bridge-769000 --memory=3072                       | bridge-769000  | jenkins | v1.31.1 | 09 Aug 23 11:28 PDT |                     |
	|         | --alsologtostderr --wait=true                        |                |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                |         |         |                     |                     |
	|         | --cni=bridge --driver=qemu2                          |                |         |         |                     |                     |
	| ssh     | -p bridge-769000 sudo cat                            | bridge-769000  | jenkins | v1.31.1 | 09 Aug 23 11:28 PDT |                     |
	|         | /etc/nsswitch.conf                                   |                |         |         |                     |                     |
	| ssh     | -p bridge-769000 sudo cat                            | bridge-769000  | jenkins | v1.31.1 | 09 Aug 23 11:28 PDT |                     |
	|         | /etc/hosts                                           |                |         |         |                     |                     |
	| ssh     | -p bridge-769000 sudo cat                            | bridge-769000  | jenkins | v1.31.1 | 09 Aug 23 11:28 PDT |                     |
	|         | /etc/resolv.conf                                     |                |         |         |                     |                     |
	| ssh     | -p bridge-769000 sudo crictl                         | bridge-769000  | jenkins | v1.31.1 | 09 Aug 23 11:28 PDT |                     |
	|         | pods                                                 |                |         |         |                     |                     |
	| ssh     | -p bridge-769000 sudo crictl                         | bridge-769000  | jenkins | v1.31.1 | 09 Aug 23 11:28 PDT |                     |
	|         | ps --all                                             |                |         |         |                     |                     |
	| ssh     | -p bridge-769000 sudo find                           | bridge-769000  | jenkins | v1.31.1 | 09 Aug 23 11:28 PDT |                     |
	|         | /etc/cni -type f -exec sh -c                         |                |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                |         |         |                     |                     |
	| ssh     | -p bridge-769000 sudo ip a s                         | bridge-769000  | jenkins | v1.31.1 | 09 Aug 23 11:28 PDT |                     |
	| ssh     | -p bridge-769000 sudo ip r s                         | bridge-769000  | jenkins | v1.31.1 | 09 Aug 23 11:28 PDT |                     |
	| ssh     | -p bridge-769000 sudo                                | bridge-769000  | jenkins | v1.31.1 | 09 Aug 23 11:28 PDT |                     |
	|         | iptables-save                                        |                |         |         |                     |                     |
	| ssh     | -p bridge-769000 sudo iptables                       | bridge-769000  | jenkins | v1.31.1 | 09 Aug 23 11:28 PDT |                     |
	|         | -t nat -L -n -v                                      |                |         |         |                     |                     |
	| ssh     | -p bridge-769000 sudo                                | bridge-769000  | jenkins | v1.31.1 | 09 Aug 23 11:28 PDT |                     |
	|         | systemctl status kubelet --all                       |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p bridge-769000 sudo                                | bridge-769000  | jenkins | v1.31.1 | 09 Aug 23 11:28 PDT |                     |
	|         | systemctl cat kubelet                                |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p bridge-769000 sudo                                | bridge-769000  | jenkins | v1.31.1 | 09 Aug 23 11:28 PDT |                     |
	|         | journalctl -xeu kubelet --all                        |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p bridge-769000 sudo cat                            | bridge-769000  | jenkins | v1.31.1 | 09 Aug 23 11:28 PDT |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                |         |         |                     |                     |
	| ssh     | -p bridge-769000 sudo cat                            | bridge-769000  | jenkins | v1.31.1 | 09 Aug 23 11:28 PDT |                     |
	|         | /var/lib/kubelet/config.yaml                         |                |         |         |                     |                     |
	| ssh     | -p bridge-769000 sudo                                | bridge-769000  | jenkins | v1.31.1 | 09 Aug 23 11:28 PDT |                     |
	|         | systemctl status docker --all                        |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p bridge-769000 sudo                                | bridge-769000  | jenkins | v1.31.1 | 09 Aug 23 11:28 PDT |                     |
	|         | systemctl cat docker                                 |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p bridge-769000 sudo cat                            | bridge-769000  | jenkins | v1.31.1 | 09 Aug 23 11:28 PDT |                     |
	|         | /etc/docker/daemon.json                              |                |         |         |                     |                     |
	| ssh     | -p bridge-769000 sudo docker                         | bridge-769000  | jenkins | v1.31.1 | 09 Aug 23 11:28 PDT |                     |
	|         | system info                                          |                |         |         |                     |                     |
	| ssh     | -p bridge-769000 sudo                                | bridge-769000  | jenkins | v1.31.1 | 09 Aug 23 11:28 PDT |                     |
	|         | systemctl status cri-docker                          |                |         |         |                     |                     |
	|         | --all --full --no-pager                              |                |         |         |                     |                     |
	| ssh     | -p bridge-769000 sudo                                | bridge-769000  | jenkins | v1.31.1 | 09 Aug 23 11:28 PDT |                     |
	|         | systemctl cat cri-docker                             |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p bridge-769000 sudo cat                            | bridge-769000  | jenkins | v1.31.1 | 09 Aug 23 11:28 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                |         |         |                     |                     |
	| ssh     | -p bridge-769000 sudo cat                            | bridge-769000  | jenkins | v1.31.1 | 09 Aug 23 11:28 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                |         |         |                     |                     |
	| ssh     | -p bridge-769000 sudo                                | bridge-769000  | jenkins | v1.31.1 | 09 Aug 23 11:28 PDT |                     |
	|         | cri-dockerd --version                                |                |         |         |                     |                     |
	| ssh     | -p bridge-769000 sudo                                | bridge-769000  | jenkins | v1.31.1 | 09 Aug 23 11:28 PDT |                     |
	|         | systemctl status containerd                          |                |         |         |                     |                     |
	|         | --all --full --no-pager                              |                |         |         |                     |                     |
	| ssh     | -p bridge-769000 sudo                                | bridge-769000  | jenkins | v1.31.1 | 09 Aug 23 11:28 PDT |                     |
	|         | systemctl cat containerd                             |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p bridge-769000 sudo cat                            | bridge-769000  | jenkins | v1.31.1 | 09 Aug 23 11:28 PDT |                     |
	|         | /lib/systemd/system/containerd.service               |                |         |         |                     |                     |
	| ssh     | -p bridge-769000 sudo cat                            | bridge-769000  | jenkins | v1.31.1 | 09 Aug 23 11:28 PDT |                     |
	|         | /etc/containerd/config.toml                          |                |         |         |                     |                     |
	| ssh     | -p bridge-769000 sudo                                | bridge-769000  | jenkins | v1.31.1 | 09 Aug 23 11:28 PDT |                     |
	|         | containerd config dump                               |                |         |         |                     |                     |
	| ssh     | -p bridge-769000 sudo                                | bridge-769000  | jenkins | v1.31.1 | 09 Aug 23 11:28 PDT |                     |
	|         | systemctl status crio --all                          |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p bridge-769000 sudo                                | bridge-769000  | jenkins | v1.31.1 | 09 Aug 23 11:28 PDT |                     |
	|         | systemctl cat crio --no-pager                        |                |         |         |                     |                     |
	| ssh     | -p bridge-769000 sudo find                           | bridge-769000  | jenkins | v1.31.1 | 09 Aug 23 11:28 PDT |                     |
	|         | /etc/crio -type f -exec sh -c                        |                |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                |         |         |                     |                     |
	| ssh     | -p bridge-769000 sudo crio                           | bridge-769000  | jenkins | v1.31.1 | 09 Aug 23 11:28 PDT |                     |
	|         | config                                               |                |         |         |                     |                     |
	| delete  | -p bridge-769000                                     | bridge-769000  | jenkins | v1.31.1 | 09 Aug 23 11:28 PDT | 09 Aug 23 11:28 PDT |
	| start   | -p kubenet-769000                                    | kubenet-769000 | jenkins | v1.31.1 | 09 Aug 23 11:28 PDT |                     |
	|         | --memory=3072                                        |                |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                |         |         |                     |                     |
	|         | --network-plugin=kubenet                             |                |         |         |                     |                     |
	|         | --driver=qemu2                                       |                |         |         |                     |                     |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/09 11:28:40
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.6 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0809 11:28:40.055635    4445 out.go:296] Setting OutFile to fd 1 ...
	I0809 11:28:40.055745    4445 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:28:40.055748    4445 out.go:309] Setting ErrFile to fd 2...
	I0809 11:28:40.055751    4445 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:28:40.055862    4445 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17011-995/.minikube/bin
	I0809 11:28:40.056860    4445 out.go:303] Setting JSON to false
	I0809 11:28:40.071808    4445 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1694,"bootTime":1691604026,"procs":408,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0809 11:28:40.071865    4445 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0809 11:28:40.077611    4445 out.go:177] * [kubenet-769000] minikube v1.31.1 on Darwin 13.5 (arm64)
	I0809 11:28:40.085625    4445 out.go:177]   - MINIKUBE_LOCATION=17011
	I0809 11:28:40.085644    4445 notify.go:220] Checking for updates...
	I0809 11:28:40.089580    4445 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	I0809 11:28:40.093542    4445 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0809 11:28:40.096559    4445 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0809 11:28:40.099557    4445 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	I0809 11:28:40.102564    4445 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0809 11:28:40.105914    4445 config.go:182] Loaded profile config "multinode-305000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0809 11:28:40.105981    4445 driver.go:373] Setting default libvirt URI to qemu:///system
	I0809 11:28:40.109593    4445 out.go:177] * Using the qemu2 driver based on user configuration
	I0809 11:28:40.116534    4445 start.go:298] selected driver: qemu2
	I0809 11:28:40.116539    4445 start.go:901] validating driver "qemu2" against <nil>
	I0809 11:28:40.116544    4445 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0809 11:28:40.118489    4445 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0809 11:28:40.121616    4445 out.go:177] * Automatically selected the socket_vmnet network
	I0809 11:28:40.125634    4445 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0809 11:28:40.125660    4445 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0809 11:28:40.125664    4445 start_flags.go:319] config:
	{Name:kubenet-769000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:kubenet-769000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0809 11:28:40.129566    4445 iso.go:125] acquiring lock: {Name:mkcf9da2bbc06f7ffafe691590a499fa3fc28d1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 11:28:40.137488    4445 out.go:177] * Starting control plane node kubenet-769000 in cluster kubenet-769000
	I0809 11:28:40.141528    4445 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0809 11:28:40.141543    4445 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4
	I0809 11:28:40.141552    4445 cache.go:57] Caching tarball of preloaded images
	I0809 11:28:40.141604    4445 preload.go:174] Found /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0809 11:28:40.141868    4445 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on docker
	I0809 11:28:40.142056    4445 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/kubenet-769000/config.json ...
	I0809 11:28:40.142107    4445 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/kubenet-769000/config.json: {Name:mkc354241677b71ae6d15d3fe495c2ff8309dfb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 11:28:40.142616    4445 start.go:365] acquiring machines lock for kubenet-769000: {Name:mk8e7245a2916cb0ab3d957fdcdeb632388214d9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0809 11:28:40.142687    4445 start.go:369] acquired machines lock for "kubenet-769000" in 56.875µs
	I0809 11:28:40.142707    4445 start.go:93] Provisioning new machine with config: &{Name:kubenet-769000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:kubenet-769000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0809 11:28:40.142748    4445 start.go:125] createHost starting for "" (driver="qemu2")
	I0809 11:28:40.150575    4445 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0809 11:28:40.166455    4445 start.go:159] libmachine.API.Create for "kubenet-769000" (driver="qemu2")
	I0809 11:28:40.166478    4445 client.go:168] LocalClient.Create starting
	I0809 11:28:40.166538    4445 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem
	I0809 11:28:40.166563    4445 main.go:141] libmachine: Decoding PEM data...
	I0809 11:28:40.166581    4445 main.go:141] libmachine: Parsing certificate...
	I0809 11:28:40.166624    4445 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/cert.pem
	I0809 11:28:40.166641    4445 main.go:141] libmachine: Decoding PEM data...
	I0809 11:28:40.166649    4445 main.go:141] libmachine: Parsing certificate...
	I0809 11:28:40.166988    4445 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17011-995/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17011-995/.minikube/cache/iso/arm64/minikube-v1.31.0-1690838458-16971-arm64.iso...
	I0809 11:28:40.280077    4445 main.go:141] libmachine: Creating SSH key...
	I0809 11:28:40.515336    4445 main.go:141] libmachine: Creating Disk image...
	I0809 11:28:40.515348    4445 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0809 11:28:40.515515    4445 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17011-995/.minikube/machines/kubenet-769000/disk.qcow2.raw /Users/jenkins/minikube-integration/17011-995/.minikube/machines/kubenet-769000/disk.qcow2
	I0809 11:28:40.524806    4445 main.go:141] libmachine: STDOUT: 
	I0809 11:28:40.524828    4445 main.go:141] libmachine: STDERR: 
	I0809 11:28:40.524909    4445 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/kubenet-769000/disk.qcow2 +20000M
	I0809 11:28:40.532255    4445 main.go:141] libmachine: STDOUT: Image resized.
	
	I0809 11:28:40.532267    4445 main.go:141] libmachine: STDERR: 
	I0809 11:28:40.532290    4445 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17011-995/.minikube/machines/kubenet-769000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17011-995/.minikube/machines/kubenet-769000/disk.qcow2
	I0809 11:28:40.532301    4445 main.go:141] libmachine: Starting QEMU VM...
	I0809 11:28:40.532340    4445 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17011-995/.minikube/machines/kubenet-769000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/kubenet-769000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17011-995/.minikube/machines/kubenet-769000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:2c:7f:4e:c8:a6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/kubenet-769000/disk.qcow2
	I0809 11:28:40.533769    4445 main.go:141] libmachine: STDOUT: 
	I0809 11:28:40.533781    4445 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0809 11:28:40.533801    4445 client.go:171] LocalClient.Create took 367.327084ms
	I0809 11:28:42.536006    4445 start.go:128] duration metric: createHost completed in 2.393286584s
	I0809 11:28:42.536079    4445 start.go:83] releasing machines lock for "kubenet-769000", held for 2.393436167s
	W0809 11:28:42.536143    4445 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:28:42.545512    4445 out.go:177] * Deleting "kubenet-769000" in qemu2 ...
	W0809 11:28:42.567844    4445 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:28:42.567868    4445 start.go:687] Will try again in 5 seconds ...
	I0809 11:28:47.570094    4445 start.go:365] acquiring machines lock for kubenet-769000: {Name:mk8e7245a2916cb0ab3d957fdcdeb632388214d9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0809 11:28:47.570538    4445 start.go:369] acquired machines lock for "kubenet-769000" in 338.167µs
	I0809 11:28:47.570658    4445 start.go:93] Provisioning new machine with config: &{Name:kubenet-769000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:kubenet-769000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0809 11:28:47.570971    4445 start.go:125] createHost starting for "" (driver="qemu2")
	
	* 
	* Profile "stopped-upgrade-181000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p stopped-upgrade-181000"

                                                
                                                
-- /stdout --
version_upgrade_test.go:220: `minikube logs` after upgrade to HEAD from v1.6.2 failed: exit status 85
--- FAIL: TestStoppedBinaryUpgrade/MinikubeLogs (0.12s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (11.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-469000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-469000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0: exit status 80 (11.051226125s)

                                                
                                                
-- stdout --
	* [old-k8s-version-469000] minikube v1.31.1 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17011
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node old-k8s-version-469000 in cluster old-k8s-version-469000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-469000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0809 11:28:48.516547    4480 out.go:296] Setting OutFile to fd 1 ...
	I0809 11:28:48.516663    4480 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:28:48.516666    4480 out.go:309] Setting ErrFile to fd 2...
	I0809 11:28:48.516668    4480 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:28:48.516783    4480 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17011-995/.minikube/bin
	I0809 11:28:48.517831    4480 out.go:303] Setting JSON to false
	I0809 11:28:48.532989    4480 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1702,"bootTime":1691604026,"procs":404,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0809 11:28:48.533081    4480 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0809 11:28:48.537498    4480 out.go:177] * [old-k8s-version-469000] minikube v1.31.1 on Darwin 13.5 (arm64)
	I0809 11:28:48.544684    4480 out.go:177]   - MINIKUBE_LOCATION=17011
	I0809 11:28:48.547619    4480 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	I0809 11:28:48.544722    4480 notify.go:220] Checking for updates...
	I0809 11:28:48.554659    4480 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0809 11:28:48.557612    4480 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0809 11:28:48.560590    4480 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	I0809 11:28:48.563666    4480 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0809 11:28:48.566830    4480 config.go:182] Loaded profile config "kubenet-769000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0809 11:28:48.566893    4480 config.go:182] Loaded profile config "multinode-305000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0809 11:28:48.566951    4480 driver.go:373] Setting default libvirt URI to qemu:///system
	I0809 11:28:48.571568    4480 out.go:177] * Using the qemu2 driver based on user configuration
	I0809 11:28:48.577491    4480 start.go:298] selected driver: qemu2
	I0809 11:28:48.577496    4480 start.go:901] validating driver "qemu2" against <nil>
	I0809 11:28:48.577501    4480 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0809 11:28:48.579380    4480 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0809 11:28:48.582605    4480 out.go:177] * Automatically selected the socket_vmnet network
	I0809 11:28:48.585633    4480 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0809 11:28:48.585656    4480 cni.go:84] Creating CNI manager for ""
	I0809 11:28:48.585662    4480 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0809 11:28:48.585666    4480 start_flags.go:319] config:
	{Name:old-k8s-version-469000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-469000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSoc
k: SSHAgentPID:0}
	I0809 11:28:48.589838    4480 iso.go:125] acquiring lock: {Name:mkcf9da2bbc06f7ffafe691590a499fa3fc28d1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 11:28:48.592581    4480 out.go:177] * Starting control plane node old-k8s-version-469000 in cluster old-k8s-version-469000
	I0809 11:28:48.600647    4480 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0809 11:28:48.600666    4480 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0809 11:28:48.600674    4480 cache.go:57] Caching tarball of preloaded images
	I0809 11:28:48.600732    4480 preload.go:174] Found /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0809 11:28:48.600746    4480 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0809 11:28:48.600818    4480 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/old-k8s-version-469000/config.json ...
	I0809 11:28:48.600831    4480 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/old-k8s-version-469000/config.json: {Name:mkcec2a47a0ed3dd296ad29d947702b9ae605ff4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 11:28:48.601042    4480 start.go:365] acquiring machines lock for old-k8s-version-469000: {Name:mk8e7245a2916cb0ab3d957fdcdeb632388214d9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0809 11:28:49.857391    4480 start.go:369] acquired machines lock for "old-k8s-version-469000" in 1.256354666s
	I0809 11:28:49.857583    4480 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-469000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-469000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0809 11:28:49.857769    4480 start.go:125] createHost starting for "" (driver="qemu2")
	I0809 11:28:49.867245    4480 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0809 11:28:49.915600    4480 start.go:159] libmachine.API.Create for "old-k8s-version-469000" (driver="qemu2")
	I0809 11:28:49.915652    4480 client.go:168] LocalClient.Create starting
	I0809 11:28:49.915767    4480 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem
	I0809 11:28:49.915814    4480 main.go:141] libmachine: Decoding PEM data...
	I0809 11:28:49.915836    4480 main.go:141] libmachine: Parsing certificate...
	I0809 11:28:49.915915    4480 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/cert.pem
	I0809 11:28:49.915948    4480 main.go:141] libmachine: Decoding PEM data...
	I0809 11:28:49.915972    4480 main.go:141] libmachine: Parsing certificate...
	I0809 11:28:49.916669    4480 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17011-995/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17011-995/.minikube/cache/iso/arm64/minikube-v1.31.0-1690838458-16971-arm64.iso...
	I0809 11:28:50.047568    4480 main.go:141] libmachine: Creating SSH key...
	I0809 11:28:50.083581    4480 main.go:141] libmachine: Creating Disk image...
	I0809 11:28:50.083591    4480 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0809 11:28:50.083763    4480 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17011-995/.minikube/machines/old-k8s-version-469000/disk.qcow2.raw /Users/jenkins/minikube-integration/17011-995/.minikube/machines/old-k8s-version-469000/disk.qcow2
	I0809 11:28:50.093142    4480 main.go:141] libmachine: STDOUT: 
	I0809 11:28:50.093162    4480 main.go:141] libmachine: STDERR: 
	I0809 11:28:50.093222    4480 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/old-k8s-version-469000/disk.qcow2 +20000M
	I0809 11:28:50.101188    4480 main.go:141] libmachine: STDOUT: Image resized.
	
	I0809 11:28:50.101209    4480 main.go:141] libmachine: STDERR: 
	I0809 11:28:50.101235    4480 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17011-995/.minikube/machines/old-k8s-version-469000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17011-995/.minikube/machines/old-k8s-version-469000/disk.qcow2
	I0809 11:28:50.101247    4480 main.go:141] libmachine: Starting QEMU VM...
	I0809 11:28:50.101289    4480 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17011-995/.minikube/machines/old-k8s-version-469000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/old-k8s-version-469000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17011-995/.minikube/machines/old-k8s-version-469000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:b5:09:71:31:1f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/old-k8s-version-469000/disk.qcow2
	I0809 11:28:50.103011    4480 main.go:141] libmachine: STDOUT: 
	I0809 11:28:50.103023    4480 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0809 11:28:50.103043    4480 client.go:171] LocalClient.Create took 187.39025ms
	I0809 11:28:52.105066    4480 start.go:128] duration metric: createHost completed in 2.247340208s
	I0809 11:28:52.105083    4480 start.go:83] releasing machines lock for "old-k8s-version-469000", held for 2.247720917s
	W0809 11:28:52.105098    4480 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:28:52.216637    4480 out.go:177] * Deleting "old-k8s-version-469000" in qemu2 ...
	W0809 11:28:52.228370    4480 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:28:52.228381    4480 start.go:687] Will try again in 5 seconds ...
	I0809 11:28:57.228467    4480 start.go:365] acquiring machines lock for old-k8s-version-469000: {Name:mk8e7245a2916cb0ab3d957fdcdeb632388214d9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0809 11:28:57.228962    4480 start.go:369] acquired machines lock for "old-k8s-version-469000" in 414.709µs
	I0809 11:28:57.229092    4480 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-469000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-469000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0809 11:28:57.229465    4480 start.go:125] createHost starting for "" (driver="qemu2")
	I0809 11:28:57.240210    4480 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0809 11:28:57.285997    4480 start.go:159] libmachine.API.Create for "old-k8s-version-469000" (driver="qemu2")
	I0809 11:28:57.286035    4480 client.go:168] LocalClient.Create starting
	I0809 11:28:57.286125    4480 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem
	I0809 11:28:57.286189    4480 main.go:141] libmachine: Decoding PEM data...
	I0809 11:28:57.286209    4480 main.go:141] libmachine: Parsing certificate...
	I0809 11:28:57.286277    4480 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/cert.pem
	I0809 11:28:57.286313    4480 main.go:141] libmachine: Decoding PEM data...
	I0809 11:28:57.286330    4480 main.go:141] libmachine: Parsing certificate...
	I0809 11:28:57.286856    4480 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17011-995/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17011-995/.minikube/cache/iso/arm64/minikube-v1.31.0-1690838458-16971-arm64.iso...
	I0809 11:28:57.413085    4480 main.go:141] libmachine: Creating SSH key...
	I0809 11:28:57.487517    4480 main.go:141] libmachine: Creating Disk image...
	I0809 11:28:57.487527    4480 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0809 11:28:57.487667    4480 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17011-995/.minikube/machines/old-k8s-version-469000/disk.qcow2.raw /Users/jenkins/minikube-integration/17011-995/.minikube/machines/old-k8s-version-469000/disk.qcow2
	I0809 11:28:57.496202    4480 main.go:141] libmachine: STDOUT: 
	I0809 11:28:57.496216    4480 main.go:141] libmachine: STDERR: 
	I0809 11:28:57.496291    4480 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/old-k8s-version-469000/disk.qcow2 +20000M
	I0809 11:28:57.503640    4480 main.go:141] libmachine: STDOUT: Image resized.
	
	I0809 11:28:57.503653    4480 main.go:141] libmachine: STDERR: 
	I0809 11:28:57.503672    4480 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17011-995/.minikube/machines/old-k8s-version-469000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17011-995/.minikube/machines/old-k8s-version-469000/disk.qcow2
	I0809 11:28:57.503679    4480 main.go:141] libmachine: Starting QEMU VM...
	I0809 11:28:57.503721    4480 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17011-995/.minikube/machines/old-k8s-version-469000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/old-k8s-version-469000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17011-995/.minikube/machines/old-k8s-version-469000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:42:fb:0b:5d:15 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/old-k8s-version-469000/disk.qcow2
	I0809 11:28:57.505220    4480 main.go:141] libmachine: STDOUT: 
	I0809 11:28:57.505232    4480 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0809 11:28:57.505244    4480 client.go:171] LocalClient.Create took 219.21ms
	I0809 11:28:59.505920    4480 start.go:128] duration metric: createHost completed in 2.27648625s
	I0809 11:28:59.505977    4480 start.go:83] releasing machines lock for "old-k8s-version-469000", held for 2.2770445s
	W0809 11:28:59.506409    4480 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-469000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-469000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:28:59.517886    4480 out.go:177] 
	W0809 11:28:59.521039    4480 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0809 11:28:59.521146    4480 out.go:239] * 
	* 
	W0809 11:28:59.524201    4480 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0809 11:28:59.531931    4480 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-469000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-469000 -n old-k8s-version-469000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-469000 -n old-k8s-version-469000: exit status 7 (52.982917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-469000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (11.11s)
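Every failure in this block traces to the same root cause: the QEMU helper socket at `/var/run/socket_vmnet` refuses connections, so `socket_vmnet_client` cannot hand a vmnet file descriptor to `qemu-system-aarch64` and host creation aborts. A minimal sketch of how one might probe the socket before triaging individual tests (hypothetical helper, not part of minikube; the `brew services` hint in the comment is an assumption about how socket_vmnet is typically launched):

```python
import socket

def unix_socket_reachable(path: str, timeout: float = 1.0) -> bool:
    """Return True if a Unix-domain socket at `path` is accepting connections."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect(path)
        return True
    except OSError:  # ConnectionRefusedError, FileNotFoundError, PermissionError, ...
        return False
    finally:
        s.close()

if __name__ == "__main__":
    # On the failing CI host this would report False until the socket_vmnet
    # daemon is running again (often managed via launchd / `brew services`;
    # the exact service name and path may differ per install).
    print(unix_socket_reachable("/var/run/socket_vmnet"))
```

If the probe reports the socket unreachable, restarting the socket_vmnet daemon on the agent is the likely fix; the per-test errors here are all downstream of that single condition.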

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (10.04s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-905000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.0-rc.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-905000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.0-rc.0: exit status 80 (9.972227167s)

                                                
                                                
-- stdout --
	* [no-preload-905000] minikube v1.31.1 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17011
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node no-preload-905000 in cluster no-preload-905000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-905000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0809 11:28:51.991171    4586 out.go:296] Setting OutFile to fd 1 ...
	I0809 11:28:51.991281    4586 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:28:51.991285    4586 out.go:309] Setting ErrFile to fd 2...
	I0809 11:28:51.991287    4586 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:28:51.991399    4586 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17011-995/.minikube/bin
	I0809 11:28:51.992365    4586 out.go:303] Setting JSON to false
	I0809 11:28:52.007628    4586 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1705,"bootTime":1691604026,"procs":403,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0809 11:28:52.007689    4586 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0809 11:28:52.012293    4586 out.go:177] * [no-preload-905000] minikube v1.31.1 on Darwin 13.5 (arm64)
	I0809 11:28:52.020197    4586 out.go:177]   - MINIKUBE_LOCATION=17011
	I0809 11:28:52.020285    4586 notify.go:220] Checking for updates...
	I0809 11:28:52.024244    4586 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	I0809 11:28:52.025343    4586 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0809 11:28:52.028211    4586 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0809 11:28:52.032218    4586 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	I0809 11:28:52.033651    4586 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0809 11:28:52.037548    4586 config.go:182] Loaded profile config "multinode-305000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0809 11:28:52.037617    4586 config.go:182] Loaded profile config "old-k8s-version-469000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0809 11:28:52.037652    4586 driver.go:373] Setting default libvirt URI to qemu:///system
	I0809 11:28:52.042218    4586 out.go:177] * Using the qemu2 driver based on user configuration
	I0809 11:28:52.050244    4586 start.go:298] selected driver: qemu2
	I0809 11:28:52.050250    4586 start.go:901] validating driver "qemu2" against <nil>
	I0809 11:28:52.050256    4586 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0809 11:28:52.052130    4586 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0809 11:28:52.056238    4586 out.go:177] * Automatically selected the socket_vmnet network
	I0809 11:28:52.057590    4586 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0809 11:28:52.057610    4586 cni.go:84] Creating CNI manager for ""
	I0809 11:28:52.057616    4586 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0809 11:28:52.057620    4586 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0809 11:28:52.057625    4586 start_flags.go:319] config:
	{Name:no-preload-905000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0-rc.0 ClusterName:no-preload-905000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunt
ime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock
: SSHAgentPID:0}
	I0809 11:28:52.061530    4586 iso.go:125] acquiring lock: {Name:mkcf9da2bbc06f7ffafe691590a499fa3fc28d1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 11:28:52.069269    4586 out.go:177] * Starting control plane node no-preload-905000 in cluster no-preload-905000
	I0809 11:28:52.073215    4586 preload.go:132] Checking if preload exists for k8s version v1.28.0-rc.0 and runtime docker
	I0809 11:28:52.073304    4586 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/no-preload-905000/config.json ...
	I0809 11:28:52.073322    4586 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/no-preload-905000/config.json: {Name:mk5337adf336c6c8a1291ebc3256c2fda885c4ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 11:28:52.073334    4586 cache.go:107] acquiring lock: {Name:mk656c5a064883838c5589f840c1394e16112ae1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 11:28:52.073324    4586 cache.go:107] acquiring lock: {Name:mkab3054a16289a4aefcfbb61ea6380445295ef9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 11:28:52.073356    4586 cache.go:107] acquiring lock: {Name:mke1be9c1842e360596210f2d984414b1d7a147e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 11:28:52.073390    4586 cache.go:115] /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0809 11:28:52.073397    4586 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 75.417µs
	I0809 11:28:52.073415    4586 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0809 11:28:52.073424    4586 cache.go:107] acquiring lock: {Name:mk693c532da01cdcc3e4ecc917d6770916d2cec1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 11:28:52.073483    4586 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.0-rc.0
	I0809 11:28:52.073505    4586 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.0-rc.0
	I0809 11:28:52.073544    4586 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I0809 11:28:52.073551    4586 cache.go:107] acquiring lock: {Name:mk53a58503f068a6568f2033462f7ac805c3d51e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 11:28:52.073532    4586 cache.go:107] acquiring lock: {Name:mk0f3f3f1712ae4b4ff51e013c17d65fda2b83bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 11:28:52.073566    4586 start.go:365] acquiring machines lock for no-preload-905000: {Name:mk8e7245a2916cb0ab3d957fdcdeb632388214d9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0809 11:28:52.073914    4586 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0809 11:28:52.073986    4586 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.0-rc.0
	I0809 11:28:52.073993    4586 cache.go:107] acquiring lock: {Name:mk40b9da5afe06ff2810670f28c0d2d05716e9fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 11:28:52.074084    4586 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.0-rc.0
	I0809 11:28:52.074083    4586 cache.go:107] acquiring lock: {Name:mk1393aec1c008b9fcc8d1e7bc3820b231e85da6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 11:28:52.074206    4586 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0809 11:28:52.082078    4586 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.0-rc.0
	I0809 11:28:52.082113    4586 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.0-rc.0
	I0809 11:28:52.082828    4586 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I0809 11:28:52.082884    4586 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0809 11:28:52.082903    4586 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.0-rc.0
	I0809 11:28:52.082971    4586 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0809 11:28:52.083025    4586 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.0-rc.0
	I0809 11:28:52.105182    4586 start.go:369] acquired machines lock for "no-preload-905000" in 31.378959ms
	I0809 11:28:52.105206    4586 start.go:93] Provisioning new machine with config: &{Name:no-preload-905000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.0-rc.0 ClusterName:no-preload-905000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0809 11:28:52.105375    4586 start.go:125] createHost starting for "" (driver="qemu2")
	I0809 11:28:52.207276    4586 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0809 11:28:52.221279    4586 start.go:159] libmachine.API.Create for "no-preload-905000" (driver="qemu2")
	I0809 11:28:52.221306    4586 client.go:168] LocalClient.Create starting
	I0809 11:28:52.221376    4586 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem
	I0809 11:28:52.221403    4586 main.go:141] libmachine: Decoding PEM data...
	I0809 11:28:52.221415    4586 main.go:141] libmachine: Parsing certificate...
	I0809 11:28:52.221463    4586 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/cert.pem
	I0809 11:28:52.221482    4586 main.go:141] libmachine: Decoding PEM data...
	I0809 11:28:52.221490    4586 main.go:141] libmachine: Parsing certificate...
	I0809 11:28:52.224542    4586 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17011-995/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17011-995/.minikube/cache/iso/arm64/minikube-v1.31.0-1690838458-16971-arm64.iso...
	I0809 11:28:52.352372    4586 main.go:141] libmachine: Creating SSH key...
	I0809 11:28:52.418579    4586 main.go:141] libmachine: Creating Disk image...
	I0809 11:28:52.418599    4586 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0809 11:28:52.418796    4586 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17011-995/.minikube/machines/no-preload-905000/disk.qcow2.raw /Users/jenkins/minikube-integration/17011-995/.minikube/machines/no-preload-905000/disk.qcow2
	I0809 11:28:52.427358    4586 main.go:141] libmachine: STDOUT: 
	I0809 11:28:52.427375    4586 main.go:141] libmachine: STDERR: 
	I0809 11:28:52.427436    4586 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/no-preload-905000/disk.qcow2 +20000M
	I0809 11:28:52.435209    4586 main.go:141] libmachine: STDOUT: Image resized.
	
	I0809 11:28:52.435233    4586 main.go:141] libmachine: STDERR: 
	I0809 11:28:52.435260    4586 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17011-995/.minikube/machines/no-preload-905000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17011-995/.minikube/machines/no-preload-905000/disk.qcow2
	I0809 11:28:52.435271    4586 main.go:141] libmachine: Starting QEMU VM...
	I0809 11:28:52.435312    4586 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17011-995/.minikube/machines/no-preload-905000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/no-preload-905000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17011-995/.minikube/machines/no-preload-905000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:2f:c4:9f:87:73 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/no-preload-905000/disk.qcow2
	I0809 11:28:52.436854    4586 main.go:141] libmachine: STDOUT: 
	I0809 11:28:52.436867    4586 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0809 11:28:52.436883    4586 client.go:171] LocalClient.Create took 215.576666ms
	I0809 11:28:52.608719    4586 cache.go:162] opening:  /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.0-rc.0
	I0809 11:28:52.778128    4586 cache.go:162] opening:  /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.0-rc.0
	I0809 11:28:52.914826    4586 cache.go:162] opening:  /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0
	I0809 11:28:53.103828    4586 cache.go:162] opening:  /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1
	I0809 11:28:53.298004    4586 cache.go:162] opening:  /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.0-rc.0
	I0809 11:28:53.518844    4586 cache.go:162] opening:  /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0809 11:28:53.713961    4586 cache.go:157] /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0809 11:28:53.713978    4586 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 1.640534s
	I0809 11:28:53.713990    4586 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0809 11:28:53.756259    4586 cache.go:162] opening:  /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.0-rc.0
	I0809 11:28:54.437087    4586 start.go:128] duration metric: createHost completed in 2.331731458s
	I0809 11:28:54.437143    4586 start.go:83] releasing machines lock for "no-preload-905000", held for 2.332000833s
	W0809 11:28:54.437223    4586 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:28:54.445301    4586 out.go:177] * Deleting "no-preload-905000" in qemu2 ...
	W0809 11:28:54.466691    4586 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:28:54.466728    4586 start.go:687] Will try again in 5 seconds ...
	I0809 11:28:54.525948    4586 cache.go:157] /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I0809 11:28:54.526003    4586 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1" took 2.45202075s
	I0809 11:28:54.526031    4586 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I0809 11:28:55.432840    4586 cache.go:157] /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.0-rc.0 exists
	I0809 11:28:55.432896    4586 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.28.0-rc.0" -> "/Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.0-rc.0" took 3.359477542s
	I0809 11:28:55.432942    4586 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.28.0-rc.0 -> /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.0-rc.0 succeeded
	I0809 11:28:56.226168    4586 cache.go:157] /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.0-rc.0 exists
	I0809 11:28:56.226207    4586 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.28.0-rc.0" -> "/Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.0-rc.0" took 4.152314s
	I0809 11:28:56.226239    4586 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.28.0-rc.0 -> /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.0-rc.0 succeeded
	I0809 11:28:56.975216    4586 cache.go:157] /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.0-rc.0 exists
	I0809 11:28:56.975277    4586 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.28.0-rc.0" -> "/Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.0-rc.0" took 4.902071542s
	I0809 11:28:56.975309    4586 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.28.0-rc.0 -> /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.0-rc.0 succeeded
	I0809 11:28:57.390529    4586 cache.go:157] /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.0-rc.0 exists
	I0809 11:28:57.390545    4586 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.28.0-rc.0" -> "/Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.0-rc.0" took 5.317352375s
	I0809 11:28:57.390555    4586 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.28.0-rc.0 -> /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.0-rc.0 succeeded
	I0809 11:28:59.466769    4586 start.go:365] acquiring machines lock for no-preload-905000: {Name:mk8e7245a2916cb0ab3d957fdcdeb632388214d9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0809 11:28:59.506056    4586 start.go:369] acquired machines lock for "no-preload-905000" in 39.210625ms
	I0809 11:28:59.506273    4586 start.go:93] Provisioning new machine with config: &{Name:no-preload-905000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.0-rc.0 ClusterName:no-preload-905000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0809 11:28:59.506480    4586 start.go:125] createHost starting for "" (driver="qemu2")
	I0809 11:28:59.513960    4586 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0809 11:28:59.562324    4586 start.go:159] libmachine.API.Create for "no-preload-905000" (driver="qemu2")
	I0809 11:28:59.562370    4586 client.go:168] LocalClient.Create starting
	I0809 11:28:59.562501    4586 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem
	I0809 11:28:59.562559    4586 main.go:141] libmachine: Decoding PEM data...
	I0809 11:28:59.562584    4586 main.go:141] libmachine: Parsing certificate...
	I0809 11:28:59.562660    4586 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/cert.pem
	I0809 11:28:59.562693    4586 main.go:141] libmachine: Decoding PEM data...
	I0809 11:28:59.562711    4586 main.go:141] libmachine: Parsing certificate...
	I0809 11:28:59.563244    4586 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17011-995/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17011-995/.minikube/cache/iso/arm64/minikube-v1.31.0-1690838458-16971-arm64.iso...
	I0809 11:28:59.696626    4586 main.go:141] libmachine: Creating SSH key...
	I0809 11:28:59.862671    4586 main.go:141] libmachine: Creating Disk image...
	I0809 11:28:59.862680    4586 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0809 11:28:59.862828    4586 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17011-995/.minikube/machines/no-preload-905000/disk.qcow2.raw /Users/jenkins/minikube-integration/17011-995/.minikube/machines/no-preload-905000/disk.qcow2
	I0809 11:28:59.871518    4586 main.go:141] libmachine: STDOUT: 
	I0809 11:28:59.871537    4586 main.go:141] libmachine: STDERR: 
	I0809 11:28:59.871605    4586 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/no-preload-905000/disk.qcow2 +20000M
	I0809 11:28:59.879851    4586 main.go:141] libmachine: STDOUT: Image resized.
	
	I0809 11:28:59.879869    4586 main.go:141] libmachine: STDERR: 
	I0809 11:28:59.879883    4586 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17011-995/.minikube/machines/no-preload-905000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17011-995/.minikube/machines/no-preload-905000/disk.qcow2
	I0809 11:28:59.879891    4586 main.go:141] libmachine: Starting QEMU VM...
	I0809 11:28:59.879930    4586 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17011-995/.minikube/machines/no-preload-905000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/no-preload-905000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17011-995/.minikube/machines/no-preload-905000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:50:8a:c4:04:34 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/no-preload-905000/disk.qcow2
	I0809 11:28:59.881692    4586 main.go:141] libmachine: STDOUT: 
	I0809 11:28:59.881707    4586 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0809 11:28:59.881727    4586 client.go:171] LocalClient.Create took 319.36075ms
	I0809 11:29:01.882465    4586 start.go:128] duration metric: createHost completed in 2.375993125s
	I0809 11:29:01.882523    4586 start.go:83] releasing machines lock for "no-preload-905000", held for 2.376502333s
	W0809 11:29:01.882828    4586 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-905000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-905000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:29:01.895342    4586 out.go:177] 
	W0809 11:29:01.903429    4586 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0809 11:29:01.903460    4586 out.go:239] * 
	* 
	W0809 11:29:01.906224    4586 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0809 11:29:01.918264    4586 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-905000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.0-rc.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-905000 -n no-preload-905000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-905000 -n no-preload-905000: exit status 7 (60.854125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-905000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (10.04s)

TestStartStop/group/old-k8s-version/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-469000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-469000 create -f testdata/busybox.yaml: exit status 1 (30.074208ms)

** stderr ** 
	error: no openapi getter

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-469000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-469000 -n old-k8s-version-469000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-469000 -n old-k8s-version-469000: exit status 7 (32.012ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-469000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-469000 -n old-k8s-version-469000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-469000 -n old-k8s-version-469000: exit status 7 (34.476208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-469000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.10s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-469000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-469000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-469000 describe deploy/metrics-server -n kube-system: exit status 1 (26.038417ms)

** stderr ** 
	error: context "old-k8s-version-469000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-469000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-469000 -n old-k8s-version-469000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-469000 -n old-k8s-version-469000: exit status 7 (30.4045ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-469000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/old-k8s-version/serial/SecondStart (7.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-469000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-469000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0: exit status 80 (7.03139475s)

-- stdout --
	* [old-k8s-version-469000] minikube v1.31.1 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17011
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.4
	* Using the qemu2 driver based on existing profile
	* Starting control plane node old-k8s-version-469000 in cluster old-k8s-version-469000
	* Restarting existing qemu2 VM for "old-k8s-version-469000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-469000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0809 11:28:59.974956    4722 out.go:296] Setting OutFile to fd 1 ...
	I0809 11:28:59.975066    4722 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:28:59.975069    4722 out.go:309] Setting ErrFile to fd 2...
	I0809 11:28:59.975071    4722 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:28:59.975177    4722 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17011-995/.minikube/bin
	I0809 11:28:59.976108    4722 out.go:303] Setting JSON to false
	I0809 11:28:59.991134    4722 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1713,"bootTime":1691604026,"procs":403,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0809 11:28:59.991211    4722 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0809 11:28:59.995240    4722 out.go:177] * [old-k8s-version-469000] minikube v1.31.1 on Darwin 13.5 (arm64)
	I0809 11:29:00.006185    4722 out.go:177]   - MINIKUBE_LOCATION=17011
	I0809 11:29:00.002289    4722 notify.go:220] Checking for updates...
	I0809 11:29:00.014247    4722 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	I0809 11:29:00.017270    4722 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0809 11:29:00.020177    4722 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0809 11:29:00.028225    4722 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	I0809 11:29:00.032185    4722 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0809 11:29:00.035454    4722 config.go:182] Loaded profile config "old-k8s-version-469000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0809 11:29:00.038147    4722 out.go:177] * Kubernetes 1.27.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.4
	I0809 11:29:00.042232    4722 driver.go:373] Setting default libvirt URI to qemu:///system
	I0809 11:29:00.046058    4722 out.go:177] * Using the qemu2 driver based on existing profile
	I0809 11:29:00.053278    4722 start.go:298] selected driver: qemu2
	I0809 11:29:00.053283    4722 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-469000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-469000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0809 11:29:00.053339    4722 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0809 11:29:00.055584    4722 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0809 11:29:00.055616    4722 cni.go:84] Creating CNI manager for ""
	I0809 11:29:00.055623    4722 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0809 11:29:00.055628    4722 start_flags.go:319] config:
	{Name:old-k8s-version-469000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-469000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0809 11:29:00.059815    4722 iso.go:125] acquiring lock: {Name:mkcf9da2bbc06f7ffafe691590a499fa3fc28d1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 11:29:00.068194    4722 out.go:177] * Starting control plane node old-k8s-version-469000 in cluster old-k8s-version-469000
	I0809 11:29:00.069912    4722 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0809 11:29:00.069927    4722 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0809 11:29:00.069935    4722 cache.go:57] Caching tarball of preloaded images
	I0809 11:29:00.069987    4722 preload.go:174] Found /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0809 11:29:00.069993    4722 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0809 11:29:00.070077    4722 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/old-k8s-version-469000/config.json ...
	I0809 11:29:00.070448    4722 start.go:365] acquiring machines lock for old-k8s-version-469000: {Name:mk8e7245a2916cb0ab3d957fdcdeb632388214d9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0809 11:29:01.882740    4722 start.go:369] acquired machines lock for "old-k8s-version-469000" in 1.812303459s
	I0809 11:29:01.882892    4722 start.go:96] Skipping create...Using existing machine configuration
	I0809 11:29:01.882923    4722 fix.go:54] fixHost starting: 
	I0809 11:29:01.883751    4722 fix.go:102] recreateIfNeeded on old-k8s-version-469000: state=Stopped err=<nil>
	W0809 11:29:01.883798    4722 fix.go:128] unexpected machine state, will restart: <nil>
	I0809 11:29:01.899303    4722 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-469000" ...
	I0809 11:29:01.907564    4722 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17011-995/.minikube/machines/old-k8s-version-469000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/old-k8s-version-469000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17011-995/.minikube/machines/old-k8s-version-469000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:42:fb:0b:5d:15 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/old-k8s-version-469000/disk.qcow2
	I0809 11:29:01.917098    4722 main.go:141] libmachine: STDOUT: 
	I0809 11:29:01.917168    4722 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0809 11:29:01.917306    4722 fix.go:56] fixHost completed within 34.392209ms
	I0809 11:29:01.917326    4722 start.go:83] releasing machines lock for "old-k8s-version-469000", held for 34.557417ms
	W0809 11:29:01.917399    4722 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0809 11:29:01.917617    4722 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:29:01.917636    4722 start.go:687] Will try again in 5 seconds ...
	I0809 11:29:06.919742    4722 start.go:365] acquiring machines lock for old-k8s-version-469000: {Name:mk8e7245a2916cb0ab3d957fdcdeb632388214d9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0809 11:29:06.920177    4722 start.go:369] acquired machines lock for "old-k8s-version-469000" in 334.542µs
	I0809 11:29:06.920311    4722 start.go:96] Skipping create...Using existing machine configuration
	I0809 11:29:06.920331    4722 fix.go:54] fixHost starting: 
	I0809 11:29:06.921081    4722 fix.go:102] recreateIfNeeded on old-k8s-version-469000: state=Stopped err=<nil>
	W0809 11:29:06.921108    4722 fix.go:128] unexpected machine state, will restart: <nil>
	I0809 11:29:06.925708    4722 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-469000" ...
	I0809 11:29:06.933814    4722 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17011-995/.minikube/machines/old-k8s-version-469000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/old-k8s-version-469000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17011-995/.minikube/machines/old-k8s-version-469000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:42:fb:0b:5d:15 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/old-k8s-version-469000/disk.qcow2
	I0809 11:29:06.943188    4722 main.go:141] libmachine: STDOUT: 
	I0809 11:29:06.943244    4722 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0809 11:29:06.943364    4722 fix.go:56] fixHost completed within 23.007333ms
	I0809 11:29:06.943381    4722 start.go:83] releasing machines lock for "old-k8s-version-469000", held for 23.183125ms
	W0809 11:29:06.943589    4722 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-469000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-469000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:29:06.951577    4722 out.go:177] 
	W0809 11:29:06.955626    4722 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0809 11:29:06.955656    4722 out.go:239] * 
	* 
	W0809 11:29:06.957656    4722 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0809 11:29:06.967479    4722 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-469000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-469000 -n old-k8s-version-469000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-469000 -n old-k8s-version-469000: exit status 7 (64.692417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-469000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (7.10s)

TestStartStop/group/no-preload/serial/DeployApp (0.08s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-905000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-905000 create -f testdata/busybox.yaml: exit status 1 (28.547958ms)

** stderr ** 
	error: no openapi getter

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-905000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-905000 -n no-preload-905000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-905000 -n no-preload-905000: exit status 7 (27.942833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-905000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-905000 -n no-preload-905000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-905000 -n no-preload-905000: exit status 7 (27.237792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-905000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.08s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-905000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-905000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-905000 describe deploy/metrics-server -n kube-system: exit status 1 (24.92075ms)

** stderr ** 
	error: context "no-preload-905000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-905000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-905000 -n no-preload-905000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-905000 -n no-preload-905000: exit status 7 (27.605209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-905000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.10s)

TestStartStop/group/no-preload/serial/SecondStart (5.19s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-905000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.0-rc.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-905000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.0-rc.0: exit status 80 (5.155604125s)

-- stdout --
	* [no-preload-905000] minikube v1.31.1 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17011
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node no-preload-905000 in cluster no-preload-905000
	* Restarting existing qemu2 VM for "no-preload-905000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-905000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0809 11:29:02.363350    4747 out.go:296] Setting OutFile to fd 1 ...
	I0809 11:29:02.363464    4747 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:29:02.363466    4747 out.go:309] Setting ErrFile to fd 2...
	I0809 11:29:02.363469    4747 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:29:02.363614    4747 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17011-995/.minikube/bin
	I0809 11:29:02.364583    4747 out.go:303] Setting JSON to false
	I0809 11:29:02.379446    4747 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1716,"bootTime":1691604026,"procs":403,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0809 11:29:02.379518    4747 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0809 11:29:02.384379    4747 out.go:177] * [no-preload-905000] minikube v1.31.1 on Darwin 13.5 (arm64)
	I0809 11:29:02.391399    4747 out.go:177]   - MINIKUBE_LOCATION=17011
	I0809 11:29:02.391452    4747 notify.go:220] Checking for updates...
	I0809 11:29:02.395346    4747 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	I0809 11:29:02.398330    4747 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0809 11:29:02.401328    4747 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0809 11:29:02.404312    4747 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	I0809 11:29:02.407342    4747 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0809 11:29:02.410545    4747 config.go:182] Loaded profile config "no-preload-905000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0-rc.0
	I0809 11:29:02.410775    4747 driver.go:373] Setting default libvirt URI to qemu:///system
	I0809 11:29:02.415317    4747 out.go:177] * Using the qemu2 driver based on existing profile
	I0809 11:29:02.422592    4747 start.go:298] selected driver: qemu2
	I0809 11:29:02.422636    4747 start.go:901] validating driver "qemu2" against &{Name:no-preload-905000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0-rc.0 ClusterName:no-preload-905000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0809 11:29:02.422722    4747 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0809 11:29:02.424947    4747 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0809 11:29:02.424974    4747 cni.go:84] Creating CNI manager for ""
	I0809 11:29:02.424980    4747 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0809 11:29:02.424985    4747 start_flags.go:319] config:
	{Name:no-preload-905000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0-rc.0 ClusterName:no-preload-905000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0809 11:29:02.428797    4747 iso.go:125] acquiring lock: {Name:mkcf9da2bbc06f7ffafe691590a499fa3fc28d1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 11:29:02.436114    4747 out.go:177] * Starting control plane node no-preload-905000 in cluster no-preload-905000
	I0809 11:29:02.440331    4747 preload.go:132] Checking if preload exists for k8s version v1.28.0-rc.0 and runtime docker
	I0809 11:29:02.440398    4747 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/no-preload-905000/config.json ...
	I0809 11:29:02.440419    4747 cache.go:107] acquiring lock: {Name:mkab3054a16289a4aefcfbb61ea6380445295ef9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 11:29:02.440420    4747 cache.go:107] acquiring lock: {Name:mk656c5a064883838c5589f840c1394e16112ae1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 11:29:02.440437    4747 cache.go:107] acquiring lock: {Name:mke1be9c1842e360596210f2d984414b1d7a147e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 11:29:02.440478    4747 cache.go:115] /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0809 11:29:02.440486    4747 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 68.5µs
	I0809 11:29:02.440492    4747 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0809 11:29:02.440485    4747 cache.go:107] acquiring lock: {Name:mk0f3f3f1712ae4b4ff51e013c17d65fda2b83bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 11:29:02.440492    4747 cache.go:115] /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.0-rc.0 exists
	I0809 11:29:02.440492    4747 cache.go:107] acquiring lock: {Name:mk40b9da5afe06ff2810670f28c0d2d05716e9fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 11:29:02.440504    4747 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.28.0-rc.0" -> "/Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.0-rc.0" took 86.959µs
	I0809 11:29:02.440566    4747 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.28.0-rc.0 -> /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.0-rc.0 succeeded
	I0809 11:29:02.440527    4747 cache.go:115] /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.0-rc.0 exists
	I0809 11:29:02.440574    4747 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.28.0-rc.0" -> "/Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.0-rc.0" took 89.125µs
	I0809 11:29:02.440578    4747 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.28.0-rc.0 -> /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.0-rc.0 succeeded
	I0809 11:29:02.440477    4747 cache.go:107] acquiring lock: {Name:mk53a58503f068a6568f2033462f7ac805c3d51e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 11:29:02.440580    4747 cache.go:107] acquiring lock: {Name:mk1393aec1c008b9fcc8d1e7bc3820b231e85da6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 11:29:02.440544    4747 cache.go:115] /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.0-rc.0 exists
	I0809 11:29:02.440597    4747 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.28.0-rc.0" -> "/Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.0-rc.0" took 105.625µs
	I0809 11:29:02.440601    4747 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.28.0-rc.0 -> /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.0-rc.0 succeeded
	I0809 11:29:02.440551    4747 cache.go:107] acquiring lock: {Name:mk693c532da01cdcc3e4ecc917d6770916d2cec1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 11:29:02.440609    4747 cache.go:115] /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0809 11:29:02.440552    4747 cache.go:115] /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.0-rc.0 exists
	I0809 11:29:02.440627    4747 cache.go:115] /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I0809 11:29:02.440633    4747 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1" took 60.667µs
	I0809 11:29:02.440643    4747 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I0809 11:29:02.440640    4747 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I0809 11:29:02.440641    4747 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.28.0-rc.0" -> "/Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.0-rc.0" took 208.042µs
	I0809 11:29:02.440612    4747 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 135.958µs
	I0809 11:29:02.440652    4747 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.28.0-rc.0 -> /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.0-rc.0 succeeded
	I0809 11:29:02.440660    4747 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0809 11:29:02.440719    4747 start.go:365] acquiring machines lock for no-preload-905000: {Name:mk8e7245a2916cb0ab3d957fdcdeb632388214d9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0809 11:29:02.440747    4747 start.go:369] acquired machines lock for "no-preload-905000" in 21.417µs
	I0809 11:29:02.440755    4747 start.go:96] Skipping create...Using existing machine configuration
	I0809 11:29:02.440760    4747 fix.go:54] fixHost starting: 
	I0809 11:29:02.440862    4747 fix.go:102] recreateIfNeeded on no-preload-905000: state=Stopped err=<nil>
	W0809 11:29:02.440868    4747 fix.go:128] unexpected machine state, will restart: <nil>
	I0809 11:29:02.448278    4747 out.go:177] * Restarting existing qemu2 VM for "no-preload-905000" ...
	I0809 11:29:02.452315    4747 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17011-995/.minikube/machines/no-preload-905000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/no-preload-905000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17011-995/.minikube/machines/no-preload-905000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:50:8a:c4:04:34 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/no-preload-905000/disk.qcow2
	I0809 11:29:02.452890    4747 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I0809 11:29:02.454210    4747 main.go:141] libmachine: STDOUT: 
	I0809 11:29:02.454230    4747 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0809 11:29:02.454256    4747 fix.go:56] fixHost completed within 13.4975ms
	I0809 11:29:02.454265    4747 start.go:83] releasing machines lock for "no-preload-905000", held for 13.51475ms
	W0809 11:29:02.454272    4747 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0809 11:29:02.454316    4747 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:29:02.454325    4747 start.go:687] Will try again in 5 seconds ...
	I0809 11:29:02.967744    4747 cache.go:162] opening:  /Users/jenkins/minikube-integration/17011-995/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0
	I0809 11:29:07.454514    4747 start.go:365] acquiring machines lock for no-preload-905000: {Name:mk8e7245a2916cb0ab3d957fdcdeb632388214d9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0809 11:29:07.454582    4747 start.go:369] acquired machines lock for "no-preload-905000" in 54.083µs
	I0809 11:29:07.454605    4747 start.go:96] Skipping create...Using existing machine configuration
	I0809 11:29:07.454609    4747 fix.go:54] fixHost starting: 
	I0809 11:29:07.454739    4747 fix.go:102] recreateIfNeeded on no-preload-905000: state=Stopped err=<nil>
	W0809 11:29:07.454743    4747 fix.go:128] unexpected machine state, will restart: <nil>
	I0809 11:29:07.458781    4747 out.go:177] * Restarting existing qemu2 VM for "no-preload-905000" ...
	I0809 11:29:07.462824    4747 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17011-995/.minikube/machines/no-preload-905000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/no-preload-905000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17011-995/.minikube/machines/no-preload-905000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:50:8a:c4:04:34 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/no-preload-905000/disk.qcow2
	I0809 11:29:07.464925    4747 main.go:141] libmachine: STDOUT: 
	I0809 11:29:07.464942    4747 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0809 11:29:07.464964    4747 fix.go:56] fixHost completed within 10.354833ms
	I0809 11:29:07.464970    4747 start.go:83] releasing machines lock for "no-preload-905000", held for 10.383333ms
	W0809 11:29:07.465043    4747 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-905000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-905000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:29:07.469750    4747 out.go:177] 
	W0809 11:29:07.473832    4747 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0809 11:29:07.473838    4747 out.go:239] * 
	* 
	W0809 11:29:07.474306    4747 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0809 11:29:07.487771    4747 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-905000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.0-rc.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-905000 -n no-preload-905000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-905000 -n no-preload-905000: exit status 7 (29.423125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-905000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.19s)
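Every qemu2 start in this run dies on the same precondition: `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. the socket_vmnet helper on the host is not accepting connections. A minimal sketch of a probe for that precondition, assuming the default socket path shown in the logs (this is a diagnostic aid, not part of the test suite):

```python
import socket

def unix_socket_reachable(path: str) -> bool:
    """True if a Unix-domain socket at `path` exists and accepts connections."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.connect(path)
        return True
    except OSError:
        # Covers FileNotFoundError (socket file missing) and
        # ConnectionRefusedError (file present, no listener) -- either
        # condition produces the driver failure seen above.
        return False
    finally:
        s.close()

# The path minikube's qemu2 driver uses in this run:
print(unix_socket_reachable("/var/run/socket_vmnet"))
```

If this prints `False` on the CI host, every socket_vmnet-backed test in this report will fail before the VM boots.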

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-469000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-469000 -n old-k8s-version-469000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-469000 -n old-k8s-version-469000: exit status 7 (31.106625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-469000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.05s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-469000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-469000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-469000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (25.366666ms)

** stderr ** 
	error: context "old-k8s-version-469000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-469000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-469000 -n old-k8s-version-469000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-469000 -n old-k8s-version-469000: exit status 7 (27.662833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-469000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.05s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p old-k8s-version-469000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p old-k8s-version-469000 "sudo crictl images -o json": exit status 89 (36.382ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p old-k8s-version-469000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p old-k8s-version-469000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p old-k8s-version-469000"
start_stop_delete_test.go:304: v1.16.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.6.2",
- 	"k8s.gcr.io/etcd:3.3.15-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.16.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.16.0",
- 	"k8s.gcr.io/kube-proxy:v1.16.0",
- 	"k8s.gcr.io/kube-scheduler:v1.16.0",
- 	"k8s.gcr.io/pause:3.1",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-469000 -n old-k8s-version-469000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-469000 -n old-k8s-version-469000: exit status 7 (27.180208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-469000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.06s)
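The `failed to decode images json invalid character '*'` message above comes from feeding `crictl`'s stdout straight into a JSON decoder: with the node stopped, stdout holds minikube's "control plane must be running" banner instead of JSON. A sketch of that decode step which separates the two cases (the `images`/`repoTags` field names follow crictl's usual output layout and are an assumption here):

```python
import json

def image_tags(raw: str) -> list[str]:
    """Extract repo tags from `crictl images -o json` output.

    Raises ValueError with a snippet of the offending text when `raw` is
    not JSON -- e.g. when minikube prints its "control plane node must be
    running" banner instead of image data.
    """
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"not JSON ({exc}): {raw[:80]!r}")
    return [tag for img in data.get("images", []) for tag in img.get("repoTags", [])]

sample = '{"images": [{"repoTags": ["registry.k8s.io/pause:3.9"]}]}'
print(image_tags(sample))  # ['registry.k8s.io/pause:3.9']
```

In this report the decoder never sees JSON at all, which is why every VerifyKubernetesImages diff lists the full expected image set as missing.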

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-469000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-469000 --alsologtostderr -v=1: exit status 89 (38.444292ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p old-k8s-version-469000"

-- /stdout --
** stderr ** 
	I0809 11:29:07.221847    4780 out.go:296] Setting OutFile to fd 1 ...
	I0809 11:29:07.222201    4780 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:29:07.222205    4780 out.go:309] Setting ErrFile to fd 2...
	I0809 11:29:07.222218    4780 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:29:07.222360    4780 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17011-995/.minikube/bin
	I0809 11:29:07.222567    4780 out.go:303] Setting JSON to false
	I0809 11:29:07.222577    4780 mustload.go:65] Loading cluster: old-k8s-version-469000
	I0809 11:29:07.222766    4780 config.go:182] Loaded profile config "old-k8s-version-469000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0809 11:29:07.225871    4780 out.go:177] * The control plane node must be running for this command
	I0809 11:29:07.229910    4780 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-469000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-469000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-469000 -n old-k8s-version-469000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-469000 -n old-k8s-version-469000: exit status 7 (28.700041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-469000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-469000 -n old-k8s-version-469000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-469000 -n old-k8s-version-469000: exit status 7 (27.572042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-469000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-905000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-905000 -n no-preload-905000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-905000 -n no-preload-905000: exit status 7 (29.229875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-905000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-905000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-905000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-905000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.634417ms)

** stderr ** 
	error: context "no-preload-905000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-905000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-905000 -n no-preload-905000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-905000 -n no-preload-905000: exit status 7 (29.490042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-905000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p no-preload-905000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p no-preload-905000 "sudo crictl images -o json": exit status 89 (40.335375ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p no-preload-905000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p no-preload-905000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p no-preload-905000"
start_stop_delete_test.go:304: v1.28.0-rc.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.0-rc.0",
- 	"registry.k8s.io/kube-controller-manager:v1.28.0-rc.0",
- 	"registry.k8s.io/kube-proxy:v1.28.0-rc.0",
- 	"registry.k8s.io/kube-scheduler:v1.28.0-rc.0",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-905000 -n no-preload-905000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-905000 -n no-preload-905000: exit status 7 (29.350958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-905000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/embed-certs/serial/FirstStart (9.86s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-470000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.4
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-470000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.4: exit status 80 (9.807354583s)

-- stdout --
	* [embed-certs-470000] minikube v1.31.1 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17011
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node embed-certs-470000 in cluster embed-certs-470000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-470000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0809 11:29:07.693712    4814 out.go:296] Setting OutFile to fd 1 ...
	I0809 11:29:07.693834    4814 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:29:07.693837    4814 out.go:309] Setting ErrFile to fd 2...
	I0809 11:29:07.693839    4814 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:29:07.693971    4814 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17011-995/.minikube/bin
	I0809 11:29:07.695213    4814 out.go:303] Setting JSON to false
	I0809 11:29:07.711815    4814 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1721,"bootTime":1691604026,"procs":402,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0809 11:29:07.711876    4814 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0809 11:29:07.720689    4814 out.go:177] * [embed-certs-470000] minikube v1.31.1 on Darwin 13.5 (arm64)
	I0809 11:29:07.730742    4814 out.go:177]   - MINIKUBE_LOCATION=17011
	I0809 11:29:07.727867    4814 notify.go:220] Checking for updates...
	I0809 11:29:07.737802    4814 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	I0809 11:29:07.741758    4814 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0809 11:29:07.744787    4814 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0809 11:29:07.747812    4814 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	I0809 11:29:07.750736    4814 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0809 11:29:07.755105    4814 config.go:182] Loaded profile config "multinode-305000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0809 11:29:07.755168    4814 config.go:182] Loaded profile config "no-preload-905000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0-rc.0
	I0809 11:29:07.755211    4814 driver.go:373] Setting default libvirt URI to qemu:///system
	I0809 11:29:07.760848    4814 out.go:177] * Using the qemu2 driver based on user configuration
	I0809 11:29:07.767784    4814 start.go:298] selected driver: qemu2
	I0809 11:29:07.767792    4814 start.go:901] validating driver "qemu2" against <nil>
	I0809 11:29:07.767798    4814 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0809 11:29:07.769736    4814 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0809 11:29:07.773805    4814 out.go:177] * Automatically selected the socket_vmnet network
	I0809 11:29:07.776958    4814 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0809 11:29:07.776994    4814 cni.go:84] Creating CNI manager for ""
	I0809 11:29:07.777003    4814 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0809 11:29:07.777010    4814 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0809 11:29:07.777021    4814 start_flags.go:319] config:
	{Name:embed-certs-470000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:embed-certs-470000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SS
HAgentPID:0}
	I0809 11:29:07.781850    4814 iso.go:125] acquiring lock: {Name:mkcf9da2bbc06f7ffafe691590a499fa3fc28d1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 11:29:07.788778    4814 out.go:177] * Starting control plane node embed-certs-470000 in cluster embed-certs-470000
	I0809 11:29:07.794788    4814 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0809 11:29:07.794811    4814 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4
	I0809 11:29:07.794827    4814 cache.go:57] Caching tarball of preloaded images
	I0809 11:29:07.794929    4814 preload.go:174] Found /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0809 11:29:07.794935    4814 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on docker
	I0809 11:29:07.795017    4814 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/embed-certs-470000/config.json ...
	I0809 11:29:07.795029    4814 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/embed-certs-470000/config.json: {Name:mkd0256f3414e9a06690567bc490ba110227fef1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 11:29:07.795203    4814 start.go:365] acquiring machines lock for embed-certs-470000: {Name:mk8e7245a2916cb0ab3d957fdcdeb632388214d9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0809 11:29:07.795227    4814 start.go:369] acquired machines lock for "embed-certs-470000" in 18.625µs
	I0809 11:29:07.795234    4814 start.go:93] Provisioning new machine with config: &{Name:embed-certs-470000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.27.4 ClusterName:embed-certs-470000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mount
MSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0809 11:29:07.795281    4814 start.go:125] createHost starting for "" (driver="qemu2")
	I0809 11:29:07.802803    4814 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0809 11:29:07.817271    4814 start.go:159] libmachine.API.Create for "embed-certs-470000" (driver="qemu2")
	I0809 11:29:07.817307    4814 client.go:168] LocalClient.Create starting
	I0809 11:29:07.817366    4814 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem
	I0809 11:29:07.817392    4814 main.go:141] libmachine: Decoding PEM data...
	I0809 11:29:07.817403    4814 main.go:141] libmachine: Parsing certificate...
	I0809 11:29:07.817453    4814 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/cert.pem
	I0809 11:29:07.817472    4814 main.go:141] libmachine: Decoding PEM data...
	I0809 11:29:07.817478    4814 main.go:141] libmachine: Parsing certificate...
	I0809 11:29:07.817822    4814 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17011-995/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17011-995/.minikube/cache/iso/arm64/minikube-v1.31.0-1690838458-16971-arm64.iso...
	I0809 11:29:07.972999    4814 main.go:141] libmachine: Creating SSH key...
	I0809 11:29:08.012386    4814 main.go:141] libmachine: Creating Disk image...
	I0809 11:29:08.012395    4814 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0809 11:29:08.012544    4814 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17011-995/.minikube/machines/embed-certs-470000/disk.qcow2.raw /Users/jenkins/minikube-integration/17011-995/.minikube/machines/embed-certs-470000/disk.qcow2
	I0809 11:29:08.025028    4814 main.go:141] libmachine: STDOUT: 
	I0809 11:29:08.025052    4814 main.go:141] libmachine: STDERR: 
	I0809 11:29:08.025109    4814 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/embed-certs-470000/disk.qcow2 +20000M
	I0809 11:29:08.032940    4814 main.go:141] libmachine: STDOUT: Image resized.
	
	I0809 11:29:08.032958    4814 main.go:141] libmachine: STDERR: 
	I0809 11:29:08.032975    4814 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17011-995/.minikube/machines/embed-certs-470000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17011-995/.minikube/machines/embed-certs-470000/disk.qcow2
	I0809 11:29:08.032981    4814 main.go:141] libmachine: Starting QEMU VM...
	I0809 11:29:08.033028    4814 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17011-995/.minikube/machines/embed-certs-470000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/embed-certs-470000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17011-995/.minikube/machines/embed-certs-470000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:05:7b:b8:57:38 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/embed-certs-470000/disk.qcow2
	I0809 11:29:08.034745    4814 main.go:141] libmachine: STDOUT: 
	I0809 11:29:08.034760    4814 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0809 11:29:08.034781    4814 client.go:171] LocalClient.Create took 217.475625ms
	I0809 11:29:10.036918    4814 start.go:128] duration metric: createHost completed in 2.2416665s
	I0809 11:29:10.037018    4814 start.go:83] releasing machines lock for "embed-certs-470000", held for 2.241838167s
	W0809 11:29:10.037126    4814 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:29:10.052173    4814 out.go:177] * Deleting "embed-certs-470000" in qemu2 ...
	W0809 11:29:10.068213    4814 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:29:10.068243    4814 start.go:687] Will try again in 5 seconds ...
	I0809 11:29:15.069791    4814 start.go:365] acquiring machines lock for embed-certs-470000: {Name:mk8e7245a2916cb0ab3d957fdcdeb632388214d9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0809 11:29:15.070208    4814 start.go:369] acquired machines lock for "embed-certs-470000" in 315.958µs
	I0809 11:29:15.070374    4814 start.go:93] Provisioning new machine with config: &{Name:embed-certs-470000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.27.4 ClusterName:embed-certs-470000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mount
MSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0809 11:29:15.070668    4814 start.go:125] createHost starting for "" (driver="qemu2")
	I0809 11:29:15.080201    4814 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0809 11:29:15.126989    4814 start.go:159] libmachine.API.Create for "embed-certs-470000" (driver="qemu2")
	I0809 11:29:15.127027    4814 client.go:168] LocalClient.Create starting
	I0809 11:29:15.127152    4814 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem
	I0809 11:29:15.127218    4814 main.go:141] libmachine: Decoding PEM data...
	I0809 11:29:15.127236    4814 main.go:141] libmachine: Parsing certificate...
	I0809 11:29:15.127325    4814 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/cert.pem
	I0809 11:29:15.127365    4814 main.go:141] libmachine: Decoding PEM data...
	I0809 11:29:15.127383    4814 main.go:141] libmachine: Parsing certificate...
	I0809 11:29:15.127919    4814 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17011-995/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17011-995/.minikube/cache/iso/arm64/minikube-v1.31.0-1690838458-16971-arm64.iso...
	I0809 11:29:15.255491    4814 main.go:141] libmachine: Creating SSH key...
	I0809 11:29:15.413062    4814 main.go:141] libmachine: Creating Disk image...
	I0809 11:29:15.413068    4814 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0809 11:29:15.413221    4814 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17011-995/.minikube/machines/embed-certs-470000/disk.qcow2.raw /Users/jenkins/minikube-integration/17011-995/.minikube/machines/embed-certs-470000/disk.qcow2
	I0809 11:29:15.422290    4814 main.go:141] libmachine: STDOUT: 
	I0809 11:29:15.422303    4814 main.go:141] libmachine: STDERR: 
	I0809 11:29:15.422367    4814 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/embed-certs-470000/disk.qcow2 +20000M
	I0809 11:29:15.429629    4814 main.go:141] libmachine: STDOUT: Image resized.
	
	I0809 11:29:15.429639    4814 main.go:141] libmachine: STDERR: 
	I0809 11:29:15.429651    4814 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17011-995/.minikube/machines/embed-certs-470000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17011-995/.minikube/machines/embed-certs-470000/disk.qcow2
	I0809 11:29:15.429659    4814 main.go:141] libmachine: Starting QEMU VM...
	I0809 11:29:15.429700    4814 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17011-995/.minikube/machines/embed-certs-470000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/embed-certs-470000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17011-995/.minikube/machines/embed-certs-470000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:4e:6e:7e:96:d1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/embed-certs-470000/disk.qcow2
	I0809 11:29:15.431147    4814 main.go:141] libmachine: STDOUT: 
	I0809 11:29:15.431159    4814 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0809 11:29:15.431170    4814 client.go:171] LocalClient.Create took 304.14575ms
	I0809 11:29:17.433265    4814 start.go:128] duration metric: createHost completed in 2.36263275s
	I0809 11:29:17.433424    4814 start.go:83] releasing machines lock for "embed-certs-470000", held for 2.363142125s
	W0809 11:29:17.433761    4814 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-470000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-470000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:29:17.447247    4814 out.go:177] 
	W0809 11:29:17.452418    4814 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0809 11:29:17.452463    4814 out.go:239] * 
	* 
	W0809 11:29:17.455168    4814 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0809 11:29:17.467225    4814 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-470000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.4": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-470000 -n embed-certs-470000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-470000 -n embed-certs-470000: exit status 7 (50.890291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-470000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.86s)

TestStartStop/group/no-preload/serial/Pause (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-905000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-905000 --alsologtostderr -v=1: exit status 89 (46.20075ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p no-preload-905000"

-- /stdout --
** stderr ** 
	I0809 11:29:07.706291    4816 out.go:296] Setting OutFile to fd 1 ...
	I0809 11:29:07.706422    4816 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:29:07.706425    4816 out.go:309] Setting ErrFile to fd 2...
	I0809 11:29:07.706427    4816 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:29:07.706541    4816 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17011-995/.minikube/bin
	I0809 11:29:07.706746    4816 out.go:303] Setting JSON to false
	I0809 11:29:07.706760    4816 mustload.go:65] Loading cluster: no-preload-905000
	I0809 11:29:07.706924    4816 config.go:182] Loaded profile config "no-preload-905000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0-rc.0
	I0809 11:29:07.709877    4816 out.go:177] * The control plane node must be running for this command
	I0809 11:29:07.717841    4816 out.go:177]   To start a cluster, run: "minikube start -p no-preload-905000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-905000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-905000 -n no-preload-905000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-905000 -n no-preload-905000: exit status 7 (35.666917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-905000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-905000 -n no-preload-905000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-905000 -n no-preload-905000: exit status 7 (33.504333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-905000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (11.4s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-708000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.4
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-708000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.4: exit status 80 (11.334991708s)

-- stdout --
	* [default-k8s-diff-port-708000] minikube v1.31.1 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17011
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node default-k8s-diff-port-708000 in cluster default-k8s-diff-port-708000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-708000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0809 11:29:08.441673    4860 out.go:296] Setting OutFile to fd 1 ...
	I0809 11:29:08.441801    4860 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:29:08.441804    4860 out.go:309] Setting ErrFile to fd 2...
	I0809 11:29:08.441806    4860 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:29:08.441923    4860 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17011-995/.minikube/bin
	I0809 11:29:08.442943    4860 out.go:303] Setting JSON to false
	I0809 11:29:08.458101    4860 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1722,"bootTime":1691604026,"procs":401,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0809 11:29:08.458175    4860 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0809 11:29:08.462750    4860 out.go:177] * [default-k8s-diff-port-708000] minikube v1.31.1 on Darwin 13.5 (arm64)
	I0809 11:29:08.468642    4860 out.go:177]   - MINIKUBE_LOCATION=17011
	I0809 11:29:08.468690    4860 notify.go:220] Checking for updates...
	I0809 11:29:08.472711    4860 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	I0809 11:29:08.475717    4860 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0809 11:29:08.478676    4860 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0809 11:29:08.481716    4860 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	I0809 11:29:08.484798    4860 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0809 11:29:08.487994    4860 config.go:182] Loaded profile config "embed-certs-470000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0809 11:29:08.488054    4860 config.go:182] Loaded profile config "multinode-305000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0809 11:29:08.488097    4860 driver.go:373] Setting default libvirt URI to qemu:///system
	I0809 11:29:08.492695    4860 out.go:177] * Using the qemu2 driver based on user configuration
	I0809 11:29:08.499666    4860 start.go:298] selected driver: qemu2
	I0809 11:29:08.499675    4860 start.go:901] validating driver "qemu2" against <nil>
	I0809 11:29:08.499681    4860 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0809 11:29:08.501587    4860 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0809 11:29:08.504695    4860 out.go:177] * Automatically selected the socket_vmnet network
	I0809 11:29:08.507773    4860 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0809 11:29:08.507794    4860 cni.go:84] Creating CNI manager for ""
	I0809 11:29:08.507800    4860 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0809 11:29:08.507805    4860 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0809 11:29:08.507811    4860 start_flags.go:319] config:
	{Name:default-k8s-diff-port-708000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:default-k8s-diff-port-708000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet Stat
icIP: SSHAuthSock: SSHAgentPID:0}
	I0809 11:29:08.512124    4860 iso.go:125] acquiring lock: {Name:mkcf9da2bbc06f7ffafe691590a499fa3fc28d1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 11:29:08.519643    4860 out.go:177] * Starting control plane node default-k8s-diff-port-708000 in cluster default-k8s-diff-port-708000
	I0809 11:29:08.523539    4860 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0809 11:29:08.523558    4860 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4
	I0809 11:29:08.523565    4860 cache.go:57] Caching tarball of preloaded images
	I0809 11:29:08.523609    4860 preload.go:174] Found /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0809 11:29:08.523613    4860 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on docker
	I0809 11:29:08.523666    4860 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/default-k8s-diff-port-708000/config.json ...
	I0809 11:29:08.523677    4860 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/default-k8s-diff-port-708000/config.json: {Name:mkd5d4f0ae053afc59ede7683d682a347b2dbd07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 11:29:08.523898    4860 start.go:365] acquiring machines lock for default-k8s-diff-port-708000: {Name:mk8e7245a2916cb0ab3d957fdcdeb632388214d9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0809 11:29:10.037148    4860 start.go:369] acquired machines lock for "default-k8s-diff-port-708000" in 1.513262458s
	I0809 11:29:10.037342    4860 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-708000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22
KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:default-k8s-diff-port-708000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:
docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8444 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0809 11:29:10.037584    4860 start.go:125] createHost starting for "" (driver="qemu2")
	I0809 11:29:10.045181    4860 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0809 11:29:10.093881    4860 start.go:159] libmachine.API.Create for "default-k8s-diff-port-708000" (driver="qemu2")
	I0809 11:29:10.093932    4860 client.go:168] LocalClient.Create starting
	I0809 11:29:10.094071    4860 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem
	I0809 11:29:10.094120    4860 main.go:141] libmachine: Decoding PEM data...
	I0809 11:29:10.094135    4860 main.go:141] libmachine: Parsing certificate...
	I0809 11:29:10.094195    4860 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/cert.pem
	I0809 11:29:10.094231    4860 main.go:141] libmachine: Decoding PEM data...
	I0809 11:29:10.094246    4860 main.go:141] libmachine: Parsing certificate...
	I0809 11:29:10.094839    4860 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17011-995/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17011-995/.minikube/cache/iso/arm64/minikube-v1.31.0-1690838458-16971-arm64.iso...
	I0809 11:29:10.219096    4860 main.go:141] libmachine: Creating SSH key...
	I0809 11:29:10.266989    4860 main.go:141] libmachine: Creating Disk image...
	I0809 11:29:10.266997    4860 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0809 11:29:10.267143    4860 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17011-995/.minikube/machines/default-k8s-diff-port-708000/disk.qcow2.raw /Users/jenkins/minikube-integration/17011-995/.minikube/machines/default-k8s-diff-port-708000/disk.qcow2
	I0809 11:29:10.275772    4860 main.go:141] libmachine: STDOUT: 
	I0809 11:29:10.275786    4860 main.go:141] libmachine: STDERR: 
	I0809 11:29:10.275829    4860 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/default-k8s-diff-port-708000/disk.qcow2 +20000M
	I0809 11:29:10.282961    4860 main.go:141] libmachine: STDOUT: Image resized.
	
	I0809 11:29:10.282975    4860 main.go:141] libmachine: STDERR: 
	I0809 11:29:10.282988    4860 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17011-995/.minikube/machines/default-k8s-diff-port-708000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17011-995/.minikube/machines/default-k8s-diff-port-708000/disk.qcow2
	I0809 11:29:10.283001    4860 main.go:141] libmachine: Starting QEMU VM...
	I0809 11:29:10.283035    4860 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17011-995/.minikube/machines/default-k8s-diff-port-708000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/default-k8s-diff-port-708000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17011-995/.minikube/machines/default-k8s-diff-port-708000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:5f:03:12:27:5a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/default-k8s-diff-port-708000/disk.qcow2
	I0809 11:29:10.284519    4860 main.go:141] libmachine: STDOUT: 
	I0809 11:29:10.284532    4860 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0809 11:29:10.284549    4860 client.go:171] LocalClient.Create took 190.615208ms
	I0809 11:29:12.286650    4860 start.go:128] duration metric: createHost completed in 2.249098s
	I0809 11:29:12.286715    4860 start.go:83] releasing machines lock for "default-k8s-diff-port-708000", held for 2.249567084s
	W0809 11:29:12.286804    4860 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:29:12.300060    4860 out.go:177] * Deleting "default-k8s-diff-port-708000" in qemu2 ...
	W0809 11:29:12.323090    4860 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:29:12.323121    4860 start.go:687] Will try again in 5 seconds ...
	I0809 11:29:17.325284    4860 start.go:365] acquiring machines lock for default-k8s-diff-port-708000: {Name:mk8e7245a2916cb0ab3d957fdcdeb632388214d9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0809 11:29:17.433512    4860 start.go:369] acquired machines lock for "default-k8s-diff-port-708000" in 108.134459ms
	I0809 11:29:17.433681    4860 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-708000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22
KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:default-k8s-diff-port-708000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:
docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8444 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0809 11:29:17.433928    4860 start.go:125] createHost starting for "" (driver="qemu2")
	I0809 11:29:17.444346    4860 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0809 11:29:17.492639    4860 start.go:159] libmachine.API.Create for "default-k8s-diff-port-708000" (driver="qemu2")
	I0809 11:29:17.492701    4860 client.go:168] LocalClient.Create starting
	I0809 11:29:17.492797    4860 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem
	I0809 11:29:17.492858    4860 main.go:141] libmachine: Decoding PEM data...
	I0809 11:29:17.492886    4860 main.go:141] libmachine: Parsing certificate...
	I0809 11:29:17.492967    4860 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/cert.pem
	I0809 11:29:17.492999    4860 main.go:141] libmachine: Decoding PEM data...
	I0809 11:29:17.493015    4860 main.go:141] libmachine: Parsing certificate...
	I0809 11:29:17.493526    4860 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17011-995/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17011-995/.minikube/cache/iso/arm64/minikube-v1.31.0-1690838458-16971-arm64.iso...
	I0809 11:29:17.623371    4860 main.go:141] libmachine: Creating SSH key...
	I0809 11:29:17.686742    4860 main.go:141] libmachine: Creating Disk image...
	I0809 11:29:17.686757    4860 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0809 11:29:17.686938    4860 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17011-995/.minikube/machines/default-k8s-diff-port-708000/disk.qcow2.raw /Users/jenkins/minikube-integration/17011-995/.minikube/machines/default-k8s-diff-port-708000/disk.qcow2
	I0809 11:29:17.696083    4860 main.go:141] libmachine: STDOUT: 
	I0809 11:29:17.696104    4860 main.go:141] libmachine: STDERR: 
	I0809 11:29:17.696173    4860 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/default-k8s-diff-port-708000/disk.qcow2 +20000M
	I0809 11:29:17.704352    4860 main.go:141] libmachine: STDOUT: Image resized.
	
	I0809 11:29:17.704376    4860 main.go:141] libmachine: STDERR: 
	I0809 11:29:17.704398    4860 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17011-995/.minikube/machines/default-k8s-diff-port-708000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17011-995/.minikube/machines/default-k8s-diff-port-708000/disk.qcow2
	I0809 11:29:17.704407    4860 main.go:141] libmachine: Starting QEMU VM...
	I0809 11:29:17.704453    4860 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17011-995/.minikube/machines/default-k8s-diff-port-708000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/default-k8s-diff-port-708000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17011-995/.minikube/machines/default-k8s-diff-port-708000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:c6:0b:52:fd:1d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/default-k8s-diff-port-708000/disk.qcow2
	I0809 11:29:17.706183    4860 main.go:141] libmachine: STDOUT: 
	I0809 11:29:17.706196    4860 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0809 11:29:17.706210    4860 client.go:171] LocalClient.Create took 213.510125ms
	I0809 11:29:19.707720    4860 start.go:128] duration metric: createHost completed in 2.273813s
	I0809 11:29:19.707816    4860 start.go:83] releasing machines lock for "default-k8s-diff-port-708000", held for 2.274335292s
	W0809 11:29:19.708228    4860 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-708000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-708000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:29:19.721855    4860 out.go:177] 
	W0809 11:29:19.727383    4860 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0809 11:29:19.727415    4860 out.go:239] * 
	* 
	W0809 11:29:19.730160    4860 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0809 11:29:19.737804    4860 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-708000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.4": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-708000 -n default-k8s-diff-port-708000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-708000 -n default-k8s-diff-port-708000: exit status 7 (64.890292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-708000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (11.40s)
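
Editor's note: every failure logged above reduces to the same root cause: the qemu2 driver cannot connect to the socket_vmnet Unix socket (`Failed to connect to "/var/run/socket_vmnet": Connection refused`), which typically means the socket_vmnet daemon is not running while its socket file may still exist. A minimal sketch reproducing that symptom (hypothetical temp path, not the CI host's actual `/var/run/socket_vmnet`):

```python
import os
import socket
import tempfile

# Create a Unix-domain socket file, then close the server side so no
# process is listening -- mimicking a stale socket_vmnet socket left
# behind by a daemon that is not running.
path = os.path.join(tempfile.mkdtemp(), "socket_vmnet")
srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(path)
srv.close()  # the socket file remains on disk, but nothing accepts connections

client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
try:
    client.connect(path)
    print("connected")
except ConnectionRefusedError:
    # Same errno (ECONNREFUSED) the qemu2 driver reports above.
    print("Connection refused")
finally:
    client.close()
```

On the CI host the equivalent check would be whether anything is listening on `/var/run/socket_vmnet`; restarting the socket_vmnet service is the usual remedy.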

TestStartStop/group/embed-certs/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-470000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-470000 create -f testdata/busybox.yaml: exit status 1 (30.266625ms)

** stderr ** 
	error: no openapi getter

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-470000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-470000 -n embed-certs-470000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-470000 -n embed-certs-470000: exit status 7 (32.47475ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-470000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-470000 -n embed-certs-470000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-470000 -n embed-certs-470000: exit status 7 (33.036583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-470000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-470000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-470000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-470000 describe deploy/metrics-server -n kube-system: exit status 1 (26.42975ms)

** stderr ** 
	error: context "embed-certs-470000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-470000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-470000 -n embed-certs-470000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-470000 -n embed-certs-470000: exit status 7 (27.595292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-470000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/embed-certs/serial/SecondStart (6.99s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-470000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.4
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-470000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.4: exit status 80 (6.924480209s)

-- stdout --
	* [embed-certs-470000] minikube v1.31.1 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17011
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node embed-certs-470000 in cluster embed-certs-470000
	* Restarting existing qemu2 VM for "embed-certs-470000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-470000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0809 11:29:17.902259    4901 out.go:296] Setting OutFile to fd 1 ...
	I0809 11:29:17.902372    4901 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:29:17.902375    4901 out.go:309] Setting ErrFile to fd 2...
	I0809 11:29:17.902377    4901 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:29:17.902487    4901 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17011-995/.minikube/bin
	I0809 11:29:17.903457    4901 out.go:303] Setting JSON to false
	I0809 11:29:17.918336    4901 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1731,"bootTime":1691604026,"procs":406,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0809 11:29:17.918401    4901 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0809 11:29:17.923320    4901 out.go:177] * [embed-certs-470000] minikube v1.31.1 on Darwin 13.5 (arm64)
	I0809 11:29:17.928311    4901 out.go:177]   - MINIKUBE_LOCATION=17011
	I0809 11:29:17.928368    4901 notify.go:220] Checking for updates...
	I0809 11:29:17.932327    4901 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	I0809 11:29:17.936273    4901 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0809 11:29:17.940255    4901 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0809 11:29:17.943302    4901 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	I0809 11:29:17.946221    4901 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0809 11:29:17.949525    4901 config.go:182] Loaded profile config "embed-certs-470000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0809 11:29:17.949756    4901 driver.go:373] Setting default libvirt URI to qemu:///system
	I0809 11:29:17.953268    4901 out.go:177] * Using the qemu2 driver based on existing profile
	I0809 11:29:17.960265    4901 start.go:298] selected driver: qemu2
	I0809 11:29:17.960271    4901 start.go:901] validating driver "qemu2" against &{Name:embed-certs-470000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.27.4 ClusterName:embed-certs-470000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested
:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0809 11:29:17.960334    4901 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0809 11:29:17.962340    4901 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0809 11:29:17.962366    4901 cni.go:84] Creating CNI manager for ""
	I0809 11:29:17.962375    4901 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0809 11:29:17.962383    4901 start_flags.go:319] config:
	{Name:embed-certs-470000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:embed-certs-470000 Namespace:default
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/m
inikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0809 11:29:17.966340    4901 iso.go:125] acquiring lock: {Name:mkcf9da2bbc06f7ffafe691590a499fa3fc28d1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 11:29:17.970300    4901 out.go:177] * Starting control plane node embed-certs-470000 in cluster embed-certs-470000
	I0809 11:29:17.978282    4901 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0809 11:29:17.978303    4901 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4
	I0809 11:29:17.978311    4901 cache.go:57] Caching tarball of preloaded images
	I0809 11:29:17.978374    4901 preload.go:174] Found /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0809 11:29:17.978380    4901 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on docker
	I0809 11:29:17.978447    4901 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/embed-certs-470000/config.json ...
	I0809 11:29:17.978779    4901 start.go:365] acquiring machines lock for embed-certs-470000: {Name:mk8e7245a2916cb0ab3d957fdcdeb632388214d9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0809 11:29:19.708119    4901 start.go:369] acquired machines lock for "embed-certs-470000" in 1.729220209s
	I0809 11:29:19.708343    4901 start.go:96] Skipping create...Using existing machine configuration
	I0809 11:29:19.708370    4901 fix.go:54] fixHost starting: 
	I0809 11:29:19.709060    4901 fix.go:102] recreateIfNeeded on embed-certs-470000: state=Stopped err=<nil>
	W0809 11:29:19.709105    4901 fix.go:128] unexpected machine state, will restart: <nil>
	I0809 11:29:19.725841    4901 out.go:177] * Restarting existing qemu2 VM for "embed-certs-470000" ...
	I0809 11:29:19.730973    4901 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17011-995/.minikube/machines/embed-certs-470000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/embed-certs-470000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17011-995/.minikube/machines/embed-certs-470000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:4e:6e:7e:96:d1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/embed-certs-470000/disk.qcow2
	I0809 11:29:19.740019    4901 main.go:141] libmachine: STDOUT: 
	I0809 11:29:19.740077    4901 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0809 11:29:19.740187    4901 fix.go:56] fixHost completed within 31.818958ms
	I0809 11:29:19.740207    4901 start.go:83] releasing machines lock for "embed-certs-470000", held for 32.052209ms
	W0809 11:29:19.740245    4901 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0809 11:29:19.740378    4901 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:29:19.740394    4901 start.go:687] Will try again in 5 seconds ...
	I0809 11:29:24.742518    4901 start.go:365] acquiring machines lock for embed-certs-470000: {Name:mk8e7245a2916cb0ab3d957fdcdeb632388214d9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0809 11:29:24.743106    4901 start.go:369] acquired machines lock for "embed-certs-470000" in 376.833µs
	I0809 11:29:24.743277    4901 start.go:96] Skipping create...Using existing machine configuration
	I0809 11:29:24.743296    4901 fix.go:54] fixHost starting: 
	I0809 11:29:24.744109    4901 fix.go:102] recreateIfNeeded on embed-certs-470000: state=Stopped err=<nil>
	W0809 11:29:24.744139    4901 fix.go:128] unexpected machine state, will restart: <nil>
	I0809 11:29:24.753566    4901 out.go:177] * Restarting existing qemu2 VM for "embed-certs-470000" ...
	I0809 11:29:24.756802    4901 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17011-995/.minikube/machines/embed-certs-470000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/embed-certs-470000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17011-995/.minikube/machines/embed-certs-470000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:4e:6e:7e:96:d1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/embed-certs-470000/disk.qcow2
	I0809 11:29:24.765286    4901 main.go:141] libmachine: STDOUT: 
	I0809 11:29:24.765333    4901 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0809 11:29:24.765418    4901 fix.go:56] fixHost completed within 22.122708ms
	I0809 11:29:24.765441    4901 start.go:83] releasing machines lock for "embed-certs-470000", held for 22.274708ms
	W0809 11:29:24.765624    4901 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-470000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-470000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:29:24.772547    4901 out.go:177] 
	W0809 11:29:24.776686    4901 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0809 11:29:24.776725    4901 out.go:239] * 
	* 
	W0809 11:29:24.779189    4901 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0809 11:29:24.787547    4901 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-470000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.4": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-470000 -n embed-certs-470000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-470000 -n embed-certs-470000: exit status 7 (65.71225ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-470000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (6.99s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-708000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-708000 create -f testdata/busybox.yaml: exit status 1 (29.129459ms)

** stderr ** 
	error: no openapi getter

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-708000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-708000 -n default-k8s-diff-port-708000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-708000 -n default-k8s-diff-port-708000: exit status 7 (27.880708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-708000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-708000 -n default-k8s-diff-port-708000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-708000 -n default-k8s-diff-port-708000: exit status 7 (27.598875ms)
                                                
-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-708000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-708000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-708000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-708000 describe deploy/metrics-server -n kube-system: exit status 1 (25.07325ms)

** stderr ** 
	error: context "default-k8s-diff-port-708000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-708000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-708000 -n default-k8s-diff-port-708000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-708000 -n default-k8s-diff-port-708000: exit status 7 (27.91075ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-708000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-708000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.4
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-708000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.4: exit status 80 (5.164036041s)

-- stdout --
	* [default-k8s-diff-port-708000] minikube v1.31.1 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17011
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node default-k8s-diff-port-708000 in cluster default-k8s-diff-port-708000
	* Restarting existing qemu2 VM for "default-k8s-diff-port-708000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-708000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0809 11:29:20.188867    4928 out.go:296] Setting OutFile to fd 1 ...
	I0809 11:29:20.188976    4928 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:29:20.188979    4928 out.go:309] Setting ErrFile to fd 2...
	I0809 11:29:20.188982    4928 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:29:20.189086    4928 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17011-995/.minikube/bin
	I0809 11:29:20.190008    4928 out.go:303] Setting JSON to false
	I0809 11:29:20.204969    4928 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1734,"bootTime":1691604026,"procs":404,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0809 11:29:20.205048    4928 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0809 11:29:20.209535    4928 out.go:177] * [default-k8s-diff-port-708000] minikube v1.31.1 on Darwin 13.5 (arm64)
	I0809 11:29:20.216666    4928 out.go:177]   - MINIKUBE_LOCATION=17011
	I0809 11:29:20.216733    4928 notify.go:220] Checking for updates...
	I0809 11:29:20.219568    4928 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	I0809 11:29:20.223623    4928 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0809 11:29:20.227664    4928 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0809 11:29:20.230651    4928 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	I0809 11:29:20.233677    4928 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0809 11:29:20.236965    4928 config.go:182] Loaded profile config "default-k8s-diff-port-708000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0809 11:29:20.237221    4928 driver.go:373] Setting default libvirt URI to qemu:///system
	I0809 11:29:20.240633    4928 out.go:177] * Using the qemu2 driver based on existing profile
	I0809 11:29:20.247692    4928 start.go:298] selected driver: qemu2
	I0809 11:29:20.247697    4928 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-708000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:default-k8s-diff-port-708000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subne
t: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0809 11:29:20.247752    4928 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0809 11:29:20.249636    4928 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0809 11:29:20.249659    4928 cni.go:84] Creating CNI manager for ""
	I0809 11:29:20.249665    4928 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0809 11:29:20.249670    4928 start_flags.go:319] config:
	{Name:default-k8s-diff-port-708000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:default-k8s-diff-port-7080
00 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0809 11:29:20.253463    4928 iso.go:125] acquiring lock: {Name:mkcf9da2bbc06f7ffafe691590a499fa3fc28d1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 11:29:20.261659    4928 out.go:177] * Starting control plane node default-k8s-diff-port-708000 in cluster default-k8s-diff-port-708000
	I0809 11:29:20.265487    4928 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0809 11:29:20.265512    4928 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4
	I0809 11:29:20.265520    4928 cache.go:57] Caching tarball of preloaded images
	I0809 11:29:20.265565    4928 preload.go:174] Found /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0809 11:29:20.265570    4928 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on docker
	I0809 11:29:20.265621    4928 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/default-k8s-diff-port-708000/config.json ...
	I0809 11:29:20.265950    4928 start.go:365] acquiring machines lock for default-k8s-diff-port-708000: {Name:mk8e7245a2916cb0ab3d957fdcdeb632388214d9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0809 11:29:20.265977    4928 start.go:369] acquired machines lock for "default-k8s-diff-port-708000" in 21µs
	I0809 11:29:20.265987    4928 start.go:96] Skipping create...Using existing machine configuration
	I0809 11:29:20.266009    4928 fix.go:54] fixHost starting: 
	I0809 11:29:20.266126    4928 fix.go:102] recreateIfNeeded on default-k8s-diff-port-708000: state=Stopped err=<nil>
	W0809 11:29:20.266134    4928 fix.go:128] unexpected machine state, will restart: <nil>
	I0809 11:29:20.270693    4928 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-708000" ...
	I0809 11:29:20.277678    4928 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17011-995/.minikube/machines/default-k8s-diff-port-708000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/default-k8s-diff-port-708000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17011-995/.minikube/machines/default-k8s-diff-port-708000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:c6:0b:52:fd:1d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/default-k8s-diff-port-708000/disk.qcow2
	I0809 11:29:20.279672    4928 main.go:141] libmachine: STDOUT: 
	I0809 11:29:20.279695    4928 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0809 11:29:20.279727    4928 fix.go:56] fixHost completed within 13.734667ms
	I0809 11:29:20.279766    4928 start.go:83] releasing machines lock for "default-k8s-diff-port-708000", held for 13.784292ms
	W0809 11:29:20.279774    4928 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0809 11:29:20.279814    4928 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:29:20.279819    4928 start.go:687] Will try again in 5 seconds ...
	I0809 11:29:25.281809    4928 start.go:365] acquiring machines lock for default-k8s-diff-port-708000: {Name:mk8e7245a2916cb0ab3d957fdcdeb632388214d9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0809 11:29:25.281917    4928 start.go:369] acquired machines lock for "default-k8s-diff-port-708000" in 82.625µs
	I0809 11:29:25.281954    4928 start.go:96] Skipping create...Using existing machine configuration
	I0809 11:29:25.281958    4928 fix.go:54] fixHost starting: 
	I0809 11:29:25.282110    4928 fix.go:102] recreateIfNeeded on default-k8s-diff-port-708000: state=Stopped err=<nil>
	W0809 11:29:25.282115    4928 fix.go:128] unexpected machine state, will restart: <nil>
	I0809 11:29:25.286605    4928 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-708000" ...
	I0809 11:29:25.293675    4928 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17011-995/.minikube/machines/default-k8s-diff-port-708000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/default-k8s-diff-port-708000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17011-995/.minikube/machines/default-k8s-diff-port-708000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:c6:0b:52:fd:1d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/default-k8s-diff-port-708000/disk.qcow2
	I0809 11:29:25.295461    4928 main.go:141] libmachine: STDOUT: 
	I0809 11:29:25.295479    4928 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0809 11:29:25.295504    4928 fix.go:56] fixHost completed within 13.5465ms
	I0809 11:29:25.295511    4928 start.go:83] releasing machines lock for "default-k8s-diff-port-708000", held for 13.589583ms
	W0809 11:29:25.295589    4928 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-708000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-708000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:29:25.303640    4928 out.go:177] 
	W0809 11:29:25.306621    4928 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0809 11:29:25.306627    4928 out.go:239] * 
	* 
	W0809 11:29:25.307122    4928 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0809 11:29:25.318661    4928 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-708000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.4": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-708000 -n default-k8s-diff-port-708000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-708000 -n default-k8s-diff-port-708000: exit status 7 (29.8175ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-708000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.20s)
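Editor's note: every restart attempt in this group dies on the same dial — the qemu2 driver cannot reach the socket_vmnet daemon's Unix socket before launching QEMU, so `fixHost` completes in ~13ms and the start is abandoned. A minimal, self-contained Go sketch of that precondition check follows; the function and socket names here are illustrative for reproduction on a dev machine, not minikube's actual driver code.

```go
package main

import (
	"fmt"
	"net"
	"os"
	"path/filepath"
)

// dialable reports whether a Unix-domain socket at path currently accepts
// a connection — the same precondition the qemu2 driver needs to hold for
// /var/run/socket_vmnet before it execs socket_vmnet_client.
func dialable(path string) bool {
	conn, err := net.Dial("unix", path)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}

func main() {
	// Demonstrate both outcomes with a throwaway socket in the temp dir.
	sock := filepath.Join(os.TempDir(), "vmnet-demo.sock")
	os.Remove(sock) // best effort: start from a clean path

	// Nothing is listening yet, so the dial fails -- the situation the
	// "Failed to connect ... Connection refused" lines above describe.
	fmt.Println("before listener:", dialable(sock))

	// Once a listener owns the socket, the same dial succeeds.
	l, err := net.Listen("unix", sock)
	if err != nil {
		panic(err)
	}
	defer l.Close()
	fmt.Println("with listener:", dialable(sock))
}
```

Run against `/var/run/socket_vmnet` on the CI host: a failed dial before `minikube start` points at the socket_vmnet daemon (not running, or wrong socket path), not at the driver itself.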

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-470000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-470000 -n embed-certs-470000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-470000 -n embed-certs-470000: exit status 7 (31.320709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-470000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.05s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-470000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-470000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-470000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (25.129084ms)

** stderr ** 
	error: context "embed-certs-470000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-470000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-470000 -n embed-certs-470000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-470000 -n embed-certs-470000: exit status 7 (27.618709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-470000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.05s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p embed-certs-470000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p embed-certs-470000 "sudo crictl images -o json": exit status 89 (36.367917ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p embed-certs-470000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p embed-certs-470000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p embed-certs-470000"
start_stop_delete_test.go:304: v1.27.4 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.7-0",
- 	"registry.k8s.io/kube-apiserver:v1.27.4",
- 	"registry.k8s.io/kube-controller-manager:v1.27.4",
- 	"registry.k8s.io/kube-proxy:v1.27.4",
- 	"registry.k8s.io/kube-scheduler:v1.27.4",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-470000 -n embed-certs-470000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-470000 -n embed-certs-470000: exit status 7 (27.364333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-470000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.06s)

TestStartStop/group/embed-certs/serial/Pause (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-470000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-470000 --alsologtostderr -v=1: exit status 89 (37.203208ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p embed-certs-470000"

-- /stdout --
** stderr ** 
	I0809 11:29:25.043603    4947 out.go:296] Setting OutFile to fd 1 ...
	I0809 11:29:25.043743    4947 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:29:25.043746    4947 out.go:309] Setting ErrFile to fd 2...
	I0809 11:29:25.043748    4947 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:29:25.043874    4947 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17011-995/.minikube/bin
	I0809 11:29:25.044079    4947 out.go:303] Setting JSON to false
	I0809 11:29:25.044088    4947 mustload.go:65] Loading cluster: embed-certs-470000
	I0809 11:29:25.044270    4947 config.go:182] Loaded profile config "embed-certs-470000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0809 11:29:25.045957    4947 out.go:177] * The control plane node must be running for this command
	I0809 11:29:25.049797    4947 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-470000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-470000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-470000 -n embed-certs-470000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-470000 -n embed-certs-470000: exit status 7 (27.098458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-470000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-470000 -n embed-certs-470000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-470000 -n embed-certs-470000: exit status 7 (27.431334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-470000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.09s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-708000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-708000 -n default-k8s-diff-port-708000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-708000 -n default-k8s-diff-port-708000: exit status 7 (28.711042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-708000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-708000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-708000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-708000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.60825ms)

** stderr ** 
	error: context "default-k8s-diff-port-708000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-708000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-708000 -n default-k8s-diff-port-708000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-708000 -n default-k8s-diff-port-708000: exit status 7 (28.949208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-708000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-708000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-708000 "sudo crictl images -o json": exit status 89 (40.94525ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-708000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-708000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p default-k8s-diff-port-708000"
start_stop_delete_test.go:304: v1.27.4 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.7-0",
- 	"registry.k8s.io/kube-apiserver:v1.27.4",
- 	"registry.k8s.io/kube-controller-manager:v1.27.4",
- 	"registry.k8s.io/kube-proxy:v1.27.4",
- 	"registry.k8s.io/kube-scheduler:v1.27.4",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-708000 -n default-k8s-diff-port-708000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-708000 -n default-k8s-diff-port-708000: exit status 7 (28.027708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-708000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/newest-cni/serial/FirstStart (10.17s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-644000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.0-rc.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-644000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.0-rc.0: exit status 80 (10.101497792s)

-- stdout --
	* [newest-cni-644000] minikube v1.31.1 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17011
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node newest-cni-644000 in cluster newest-cni-644000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-644000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0809 11:29:25.533343    4982 out.go:296] Setting OutFile to fd 1 ...
	I0809 11:29:25.533470    4982 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:29:25.533478    4982 out.go:309] Setting ErrFile to fd 2...
	I0809 11:29:25.533480    4982 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:29:25.533614    4982 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17011-995/.minikube/bin
	I0809 11:29:25.534811    4982 out.go:303] Setting JSON to false
	I0809 11:29:25.550909    4982 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1739,"bootTime":1691604026,"procs":404,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0809 11:29:25.550959    4982 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0809 11:29:25.559632    4982 out.go:177] * [newest-cni-644000] minikube v1.31.1 on Darwin 13.5 (arm64)
	I0809 11:29:25.563582    4982 notify.go:220] Checking for updates...
	I0809 11:29:25.567632    4982 out.go:177]   - MINIKUBE_LOCATION=17011
	I0809 11:29:25.570538    4982 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	I0809 11:29:25.574608    4982 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0809 11:29:25.577660    4982 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0809 11:29:25.586691    4982 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	I0809 11:29:25.594596    4982 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0809 11:29:25.598912    4982 config.go:182] Loaded profile config "default-k8s-diff-port-708000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0809 11:29:25.598973    4982 config.go:182] Loaded profile config "multinode-305000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0809 11:29:25.599014    4982 driver.go:373] Setting default libvirt URI to qemu:///system
	I0809 11:29:25.602627    4982 out.go:177] * Using the qemu2 driver based on user configuration
	I0809 11:29:25.609587    4982 start.go:298] selected driver: qemu2
	I0809 11:29:25.609594    4982 start.go:901] validating driver "qemu2" against <nil>
	I0809 11:29:25.609600    4982 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0809 11:29:25.611332    4982 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	W0809 11:29:25.611351    4982 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0809 11:29:25.614654    4982 out.go:177] * Automatically selected the socket_vmnet network
	I0809 11:29:25.621702    4982 start_flags.go:938] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0809 11:29:25.621724    4982 cni.go:84] Creating CNI manager for ""
	I0809 11:29:25.621731    4982 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0809 11:29:25.621735    4982 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0809 11:29:25.621742    4982 start_flags.go:319] config:
	{Name:newest-cni-644000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0-rc.0 ClusterName:newest-cni-644000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunt
ime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/
bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0809 11:29:25.625959    4982 iso.go:125] acquiring lock: {Name:mkcf9da2bbc06f7ffafe691590a499fa3fc28d1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 11:29:25.632607    4982 out.go:177] * Starting control plane node newest-cni-644000 in cluster newest-cni-644000
	I0809 11:29:25.636637    4982 preload.go:132] Checking if preload exists for k8s version v1.28.0-rc.0 and runtime docker
	I0809 11:29:25.636661    4982 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-rc.0-docker-overlay2-arm64.tar.lz4
	I0809 11:29:25.636668    4982 cache.go:57] Caching tarball of preloaded images
	I0809 11:29:25.636742    4982 preload.go:174] Found /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-rc.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0809 11:29:25.636749    4982 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.0-rc.0 on docker
	I0809 11:29:25.636812    4982 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/newest-cni-644000/config.json ...
	I0809 11:29:25.636824    4982 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/newest-cni-644000/config.json: {Name:mk8edd6b4ad640f0fb46f6c1bd94c0d3383501e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 11:29:25.637028    4982 start.go:365] acquiring machines lock for newest-cni-644000: {Name:mk8e7245a2916cb0ab3d957fdcdeb632388214d9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0809 11:29:25.637051    4982 start.go:369] acquired machines lock for "newest-cni-644000" in 17.708µs
	I0809 11:29:25.637060    4982 start.go:93] Provisioning new machine with config: &{Name:newest-cni-644000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.0-rc.0 ClusterName:newest-cni-644000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0809 11:29:25.637093    4982 start.go:125] createHost starting for "" (driver="qemu2")
	I0809 11:29:25.645705    4982 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0809 11:29:25.660001    4982 start.go:159] libmachine.API.Create for "newest-cni-644000" (driver="qemu2")
	I0809 11:29:25.660033    4982 client.go:168] LocalClient.Create starting
	I0809 11:29:25.660121    4982 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem
	I0809 11:29:25.660147    4982 main.go:141] libmachine: Decoding PEM data...
	I0809 11:29:25.660155    4982 main.go:141] libmachine: Parsing certificate...
	I0809 11:29:25.660192    4982 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/cert.pem
	I0809 11:29:25.660211    4982 main.go:141] libmachine: Decoding PEM data...
	I0809 11:29:25.660219    4982 main.go:141] libmachine: Parsing certificate...
	I0809 11:29:25.660547    4982 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17011-995/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17011-995/.minikube/cache/iso/arm64/minikube-v1.31.0-1690838458-16971-arm64.iso...
	I0809 11:29:25.807243    4982 main.go:141] libmachine: Creating SSH key...
	I0809 11:29:26.159885    4982 main.go:141] libmachine: Creating Disk image...
	I0809 11:29:26.159899    4982 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0809 11:29:26.160072    4982 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17011-995/.minikube/machines/newest-cni-644000/disk.qcow2.raw /Users/jenkins/minikube-integration/17011-995/.minikube/machines/newest-cni-644000/disk.qcow2
	I0809 11:29:26.169049    4982 main.go:141] libmachine: STDOUT: 
	I0809 11:29:26.169076    4982 main.go:141] libmachine: STDERR: 
	I0809 11:29:26.169147    4982 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/newest-cni-644000/disk.qcow2 +20000M
	I0809 11:29:26.176412    4982 main.go:141] libmachine: STDOUT: Image resized.
	
	I0809 11:29:26.176425    4982 main.go:141] libmachine: STDERR: 
	I0809 11:29:26.176445    4982 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17011-995/.minikube/machines/newest-cni-644000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17011-995/.minikube/machines/newest-cni-644000/disk.qcow2
	I0809 11:29:26.176454    4982 main.go:141] libmachine: Starting QEMU VM...
	I0809 11:29:26.176496    4982 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17011-995/.minikube/machines/newest-cni-644000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/newest-cni-644000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17011-995/.minikube/machines/newest-cni-644000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:c3:9c:6e:e0:5e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/newest-cni-644000/disk.qcow2
	I0809 11:29:26.177971    4982 main.go:141] libmachine: STDOUT: 
	I0809 11:29:26.177982    4982 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0809 11:29:26.177999    4982 client.go:171] LocalClient.Create took 517.9735ms
	I0809 11:29:28.180191    4982 start.go:128] duration metric: createHost completed in 2.543128208s
	I0809 11:29:28.180281    4982 start.go:83] releasing machines lock for "newest-cni-644000", held for 2.543284125s
	W0809 11:29:28.180390    4982 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:29:28.185816    4982 out.go:177] * Deleting "newest-cni-644000" in qemu2 ...
	W0809 11:29:28.210589    4982 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:29:28.210620    4982 start.go:687] Will try again in 5 seconds ...
	I0809 11:29:33.212721    4982 start.go:365] acquiring machines lock for newest-cni-644000: {Name:mk8e7245a2916cb0ab3d957fdcdeb632388214d9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0809 11:29:33.213293    4982 start.go:369] acquired machines lock for "newest-cni-644000" in 445.25µs
	I0809 11:29:33.213435    4982 start.go:93] Provisioning new machine with config: &{Name:newest-cni-644000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.0-rc.0 ClusterName:newest-cni-644000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0809 11:29:33.213757    4982 start.go:125] createHost starting for "" (driver="qemu2")
	I0809 11:29:33.220476    4982 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0809 11:29:33.268354    4982 start.go:159] libmachine.API.Create for "newest-cni-644000" (driver="qemu2")
	I0809 11:29:33.268396    4982 client.go:168] LocalClient.Create starting
	I0809 11:29:33.268527    4982 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/ca.pem
	I0809 11:29:33.268585    4982 main.go:141] libmachine: Decoding PEM data...
	I0809 11:29:33.268611    4982 main.go:141] libmachine: Parsing certificate...
	I0809 11:29:33.268681    4982 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17011-995/.minikube/certs/cert.pem
	I0809 11:29:33.268716    4982 main.go:141] libmachine: Decoding PEM data...
	I0809 11:29:33.268733    4982 main.go:141] libmachine: Parsing certificate...
	I0809 11:29:33.269233    4982 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17011-995/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17011-995/.minikube/cache/iso/arm64/minikube-v1.31.0-1690838458-16971-arm64.iso...
	I0809 11:29:33.396558    4982 main.go:141] libmachine: Creating SSH key...
	I0809 11:29:33.549769    4982 main.go:141] libmachine: Creating Disk image...
	I0809 11:29:33.549778    4982 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0809 11:29:33.549944    4982 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17011-995/.minikube/machines/newest-cni-644000/disk.qcow2.raw /Users/jenkins/minikube-integration/17011-995/.minikube/machines/newest-cni-644000/disk.qcow2
	I0809 11:29:33.558793    4982 main.go:141] libmachine: STDOUT: 
	I0809 11:29:33.558820    4982 main.go:141] libmachine: STDERR: 
	I0809 11:29:33.558878    4982 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/newest-cni-644000/disk.qcow2 +20000M
	I0809 11:29:33.566013    4982 main.go:141] libmachine: STDOUT: Image resized.
	
	I0809 11:29:33.566025    4982 main.go:141] libmachine: STDERR: 
	I0809 11:29:33.566046    4982 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17011-995/.minikube/machines/newest-cni-644000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17011-995/.minikube/machines/newest-cni-644000/disk.qcow2
	I0809 11:29:33.566054    4982 main.go:141] libmachine: Starting QEMU VM...
	I0809 11:29:33.566093    4982 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17011-995/.minikube/machines/newest-cni-644000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/newest-cni-644000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17011-995/.minikube/machines/newest-cni-644000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:f7:85:df:b1:bc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/newest-cni-644000/disk.qcow2
	I0809 11:29:33.567631    4982 main.go:141] libmachine: STDOUT: 
	I0809 11:29:33.567642    4982 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0809 11:29:33.567656    4982 client.go:171] LocalClient.Create took 299.262875ms
	I0809 11:29:35.567897    4982 start.go:128] duration metric: createHost completed in 2.354176083s
	I0809 11:29:35.567948    4982 start.go:83] releasing machines lock for "newest-cni-644000", held for 2.354684417s
	W0809 11:29:35.568359    4982 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-644000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-644000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:29:35.576563    4982 out.go:177] 
	W0809 11:29:35.581487    4982 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0809 11:29:35.581509    4982 out.go:239] * 
	* 
	W0809 11:29:35.584054    4982 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0809 11:29:35.593511    4982 out.go:177] 
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-644000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.0-rc.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-644000 -n newest-cni-644000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-644000 -n newest-cni-644000: exit status 7 (66.76425ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-644000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (10.17s)
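Every qemu2 start failure in this run reduces to the same root cause visible in the log above: the driver shells out through `/opt/socket_vmnet/bin/socket_vmnet_client`, which cannot reach the socket_vmnet daemon (`Failed to connect to "/var/run/socket_vmnet": Connection refused`). A minimal, hypothetical pre-flight check (not part of the test suite; the `check_socket` helper name is an assumption) would confirm whether the daemon's unix socket exists before the tests run:

```shell
# Hypothetical pre-flight check for the socket_vmnet failures seen above.
# check_socket reports whether a unix-domain socket exists at the given path.
check_socket() {
  if [ -S "$1" ]; then
    echo "socket present: $1"
  else
    echo "socket missing: $1"
  fi
}

# The path minikube's qemu2 driver tries to reach (taken from the logs above):
check_socket /var/run/socket_vmnet
```

If the socket is missing, the daemon is most likely not running on the CI host; on a Homebrew install it is typically started with `sudo brew services start socket_vmnet` (assumption: socket_vmnet was installed via Homebrew, as the `/opt/homebrew/opt/qemu` firmware path in the log suggests).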
TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-708000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-708000 --alsologtostderr -v=1: exit status 89 (41.333875ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-708000"
-- /stdout --
** stderr ** 
	I0809 11:29:25.540385    4983 out.go:296] Setting OutFile to fd 1 ...
	I0809 11:29:25.540506    4983 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:29:25.540509    4983 out.go:309] Setting ErrFile to fd 2...
	I0809 11:29:25.540511    4983 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:29:25.540618    4983 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17011-995/.minikube/bin
	I0809 11:29:25.540799    4983 out.go:303] Setting JSON to false
	I0809 11:29:25.540811    4983 mustload.go:65] Loading cluster: default-k8s-diff-port-708000
	I0809 11:29:25.540977    4983 config.go:182] Loaded profile config "default-k8s-diff-port-708000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0809 11:29:25.545587    4983 out.go:177] * The control plane node must be running for this command
	I0809 11:29:25.549735    4983 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-708000"
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-708000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-708000 -n default-k8s-diff-port-708000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-708000 -n default-k8s-diff-port-708000: exit status 7 (32.265625ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-708000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-708000 -n default-k8s-diff-port-708000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-708000 -n default-k8s-diff-port-708000: exit status 7 (31.065792ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-708000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)
TestStartStop/group/newest-cni/serial/SecondStart (5.25s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-644000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.0-rc.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-644000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.0-rc.0: exit status 80 (5.17936875s)
-- stdout --
	* [newest-cni-644000] minikube v1.31.1 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17011
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node newest-cni-644000 in cluster newest-cni-644000
	* Restarting existing qemu2 VM for "newest-cni-644000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-644000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0809 11:29:35.912212    5031 out.go:296] Setting OutFile to fd 1 ...
	I0809 11:29:35.912342    5031 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:29:35.912345    5031 out.go:309] Setting ErrFile to fd 2...
	I0809 11:29:35.912347    5031 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:29:35.912458    5031 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17011-995/.minikube/bin
	I0809 11:29:35.913395    5031 out.go:303] Setting JSON to false
	I0809 11:29:35.929476    5031 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1749,"bootTime":1691604026,"procs":400,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0809 11:29:35.929593    5031 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0809 11:29:35.933342    5031 out.go:177] * [newest-cni-644000] minikube v1.31.1 on Darwin 13.5 (arm64)
	I0809 11:29:35.940206    5031 out.go:177]   - MINIKUBE_LOCATION=17011
	I0809 11:29:35.940256    5031 notify.go:220] Checking for updates...
	I0809 11:29:35.947108    5031 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	I0809 11:29:35.950158    5031 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0809 11:29:35.953210    5031 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0809 11:29:35.954532    5031 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	I0809 11:29:35.957173    5031 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0809 11:29:35.960452    5031 config.go:182] Loaded profile config "newest-cni-644000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0-rc.0
	I0809 11:29:35.960700    5031 driver.go:373] Setting default libvirt URI to qemu:///system
	I0809 11:29:35.965006    5031 out.go:177] * Using the qemu2 driver based on existing profile
	I0809 11:29:35.972208    5031 start.go:298] selected driver: qemu2
	I0809 11:29:35.972215    5031 start.go:901] validating driver "qemu2" against &{Name:newest-cni-644000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.0-rc.0 ClusterName:newest-cni-644000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s Schedu
ledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0809 11:29:35.972295    5031 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0809 11:29:35.974292    5031 start_flags.go:938] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0809 11:29:35.974315    5031 cni.go:84] Creating CNI manager for ""
	I0809 11:29:35.974321    5031 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0809 11:29:35.974325    5031 start_flags.go:319] config:
	{Name:newest-cni-644000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0-rc.0 ClusterName:newest-cni-644000 Namespace:defa
ult APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: Multi
NodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0809 11:29:35.978314    5031 iso.go:125] acquiring lock: {Name:mkcf9da2bbc06f7ffafe691590a499fa3fc28d1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 11:29:35.985172    5031 out.go:177] * Starting control plane node newest-cni-644000 in cluster newest-cni-644000
	I0809 11:29:35.989215    5031 preload.go:132] Checking if preload exists for k8s version v1.28.0-rc.0 and runtime docker
	I0809 11:29:35.989233    5031 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-rc.0-docker-overlay2-arm64.tar.lz4
	I0809 11:29:35.989249    5031 cache.go:57] Caching tarball of preloaded images
	I0809 11:29:35.989306    5031 preload.go:174] Found /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-rc.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0809 11:29:35.989312    5031 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.0-rc.0 on docker
	I0809 11:29:35.989376    5031 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/newest-cni-644000/config.json ...
	I0809 11:29:35.989729    5031 start.go:365] acquiring machines lock for newest-cni-644000: {Name:mk8e7245a2916cb0ab3d957fdcdeb632388214d9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0809 11:29:35.989754    5031 start.go:369] acquired machines lock for "newest-cni-644000" in 19.083µs
	I0809 11:29:35.989763    5031 start.go:96] Skipping create...Using existing machine configuration
	I0809 11:29:35.989767    5031 fix.go:54] fixHost starting: 
	I0809 11:29:35.989877    5031 fix.go:102] recreateIfNeeded on newest-cni-644000: state=Stopped err=<nil>
	W0809 11:29:35.989885    5031 fix.go:128] unexpected machine state, will restart: <nil>
	I0809 11:29:35.994197    5031 out.go:177] * Restarting existing qemu2 VM for "newest-cni-644000" ...
	I0809 11:29:36.002141    5031 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17011-995/.minikube/machines/newest-cni-644000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/newest-cni-644000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17011-995/.minikube/machines/newest-cni-644000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:f7:85:df:b1:bc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/newest-cni-644000/disk.qcow2
	I0809 11:29:36.003951    5031 main.go:141] libmachine: STDOUT: 
	I0809 11:29:36.003969    5031 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0809 11:29:36.003999    5031 fix.go:56] fixHost completed within 14.232334ms
	I0809 11:29:36.004004    5031 start.go:83] releasing machines lock for "newest-cni-644000", held for 14.24675ms
	W0809 11:29:36.004011    5031 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0809 11:29:36.004039    5031 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:29:36.004044    5031 start.go:687] Will try again in 5 seconds ...
	I0809 11:29:41.006112    5031 start.go:365] acquiring machines lock for newest-cni-644000: {Name:mk8e7245a2916cb0ab3d957fdcdeb632388214d9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0809 11:29:41.006665    5031 start.go:369] acquired machines lock for "newest-cni-644000" in 463.291µs
	I0809 11:29:41.006819    5031 start.go:96] Skipping create...Using existing machine configuration
	I0809 11:29:41.006839    5031 fix.go:54] fixHost starting: 
	I0809 11:29:41.007529    5031 fix.go:102] recreateIfNeeded on newest-cni-644000: state=Stopped err=<nil>
	W0809 11:29:41.007557    5031 fix.go:128] unexpected machine state, will restart: <nil>
	I0809 11:29:41.015985    5031 out.go:177] * Restarting existing qemu2 VM for "newest-cni-644000" ...
	I0809 11:29:41.021366    5031 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17011-995/.minikube/machines/newest-cni-644000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/newest-cni-644000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17011-995/.minikube/machines/newest-cni-644000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:f7:85:df:b1:bc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17011-995/.minikube/machines/newest-cni-644000/disk.qcow2
	I0809 11:29:41.030426    5031 main.go:141] libmachine: STDOUT: 
	I0809 11:29:41.030479    5031 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0809 11:29:41.030549    5031 fix.go:56] fixHost completed within 23.711458ms
	I0809 11:29:41.030569    5031 start.go:83] releasing machines lock for "newest-cni-644000", held for 23.88275ms
	W0809 11:29:41.030763    5031 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-644000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-644000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0809 11:29:41.038028    5031 out.go:177] 
	W0809 11:29:41.041262    5031 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0809 11:29:41.041307    5031 out.go:239] * 
	* 
	W0809 11:29:41.044060    5031 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0809 11:29:41.051843    5031 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-644000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.0-rc.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-644000 -n newest-cni-644000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-644000 -n newest-cni-644000: exit status 7 (67.868625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-644000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p newest-cni-644000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p newest-cni-644000 "sudo crictl images -o json": exit status 89 (44.293291ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p newest-cni-644000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p newest-cni-644000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p newest-cni-644000"
start_stop_delete_test.go:304: v1.28.0-rc.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.0-rc.0",
- 	"registry.k8s.io/kube-controller-manager:v1.28.0-rc.0",
- 	"registry.k8s.io/kube-proxy:v1.28.0-rc.0",
- 	"registry.k8s.io/kube-scheduler:v1.28.0-rc.0",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-644000 -n newest-cni-644000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-644000 -n newest-cni-644000: exit status 7 (27.951875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-644000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-644000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-644000 --alsologtostderr -v=1: exit status 89 (40.570334ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p newest-cni-644000"

-- /stdout --
** stderr ** 
	I0809 11:29:41.234007    5052 out.go:296] Setting OutFile to fd 1 ...
	I0809 11:29:41.234169    5052 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:29:41.234171    5052 out.go:309] Setting ErrFile to fd 2...
	I0809 11:29:41.234173    5052 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:29:41.234287    5052 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17011-995/.minikube/bin
	I0809 11:29:41.234489    5052 out.go:303] Setting JSON to false
	I0809 11:29:41.234501    5052 mustload.go:65] Loading cluster: newest-cni-644000
	I0809 11:29:41.234677    5052 config.go:182] Loaded profile config "newest-cni-644000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.0-rc.0
	I0809 11:29:41.238964    5052 out.go:177] * The control plane node must be running for this command
	I0809 11:29:41.243018    5052 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-644000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-644000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-644000 -n newest-cni-644000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-644000 -n newest-cni-644000: exit status 7 (28.315042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-644000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-644000 -n newest-cni-644000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-644000 -n newest-cni-644000: exit status 7 (28.110666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-644000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)
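Every failure in this group traces to the same root cause logged above: `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. no socket_vmnet daemon was listening when QEMU tried to attach its network backend. A minimal pre-flight check is sketched below; the socket path is the one from the log, and the suggested `brew services` remediation assumes a Homebrew install of socket_vmnet (as on this agent), so verify it against your setup:

```shell
#!/bin/sh
# check_vmnet_socket: report whether a socket_vmnet unix socket exists
# at the given path. Default path matches the one minikube used above.
check_vmnet_socket() {
  sock="${1:-/var/run/socket_vmnet}"
  if [ -S "$sock" ]; then
    echo "socket present: $sock"
  else
    # Remediation hint assumes a Homebrew-managed socket_vmnet service.
    echo "no socket at $sock -- try: sudo brew services start socket_vmnet"
    return 1
  fi
}
```

Running a check like this before the suite would let the harness fail fast with one clear message instead of each test retrying `minikube start` against a dead socket.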


Test pass (139/250)

Order passed test Duration
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.1
10 TestDownloadOnly/v1.27.4/json-events 10.64
11 TestDownloadOnly/v1.27.4/preload-exists 0
14 TestDownloadOnly/v1.27.4/kubectl 0
15 TestDownloadOnly/v1.27.4/LogsDuration 0.07
17 TestDownloadOnly/v1.28.0-rc.0/json-events 19.38
18 TestDownloadOnly/v1.28.0-rc.0/preload-exists 0
21 TestDownloadOnly/v1.28.0-rc.0/kubectl 0
22 TestDownloadOnly/v1.28.0-rc.0/LogsDuration 0.07
23 TestDownloadOnly/DeleteAll 0.27
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.23
26 TestBinaryMirror 0.34
37 TestHyperKitDriverInstallOrUpdate 8.29
40 TestErrorSpam/setup 30.14
41 TestErrorSpam/start 0.35
42 TestErrorSpam/status 0.27
43 TestErrorSpam/pause 0.68
44 TestErrorSpam/unpause 0.63
45 TestErrorSpam/stop 3.23
48 TestFunctional/serial/CopySyncFile 0
49 TestFunctional/serial/StartWithProxy 82.97
50 TestFunctional/serial/AuditLog 0
51 TestFunctional/serial/SoftStart 34.81
52 TestFunctional/serial/KubeContext 0.03
53 TestFunctional/serial/KubectlGetPods 0.05
56 TestFunctional/serial/CacheCmd/cache/add_remote 3.59
57 TestFunctional/serial/CacheCmd/cache/add_local 1.27
58 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.03
59 TestFunctional/serial/CacheCmd/cache/list 0.03
60 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.07
61 TestFunctional/serial/CacheCmd/cache/cache_reload 0.93
62 TestFunctional/serial/CacheCmd/cache/delete 0.07
63 TestFunctional/serial/MinikubeKubectlCmd 0.42
64 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.56
65 TestFunctional/serial/ExtraConfig 37.12
66 TestFunctional/serial/ComponentHealth 0.04
67 TestFunctional/serial/LogsCmd 0.64
68 TestFunctional/serial/LogsFileCmd 0.63
69 TestFunctional/serial/InvalidService 4.06
71 TestFunctional/parallel/ConfigCmd 0.21
72 TestFunctional/parallel/DashboardCmd 7.24
73 TestFunctional/parallel/DryRun 0.22
74 TestFunctional/parallel/InternationalLanguage 0.11
75 TestFunctional/parallel/StatusCmd 0.28
80 TestFunctional/parallel/AddonsCmd 0.17
81 TestFunctional/parallel/PersistentVolumeClaim 24.13
83 TestFunctional/parallel/SSHCmd 0.13
84 TestFunctional/parallel/CpCmd 0.28
86 TestFunctional/parallel/FileSync 0.07
87 TestFunctional/parallel/CertSync 0.45
91 TestFunctional/parallel/NodeLabels 0.04
93 TestFunctional/parallel/NonActiveRuntimeDisabled 0.12
95 TestFunctional/parallel/License 0.23
97 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.21
98 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
100 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.12
101 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
102 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
103 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.04
104 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.03
105 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
106 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.13
107 TestFunctional/parallel/ServiceCmd/DeployApp 6.09
108 TestFunctional/parallel/ServiceCmd/List 0.31
109 TestFunctional/parallel/ServiceCmd/JSONOutput 0.29
110 TestFunctional/parallel/ServiceCmd/HTTPS 0.12
111 TestFunctional/parallel/ServiceCmd/Format 0.1
112 TestFunctional/parallel/ServiceCmd/URL 0.1
113 TestFunctional/parallel/ProfileCmd/profile_not_create 0.19
114 TestFunctional/parallel/ProfileCmd/profile_list 0.15
115 TestFunctional/parallel/ProfileCmd/profile_json_output 0.15
117 TestFunctional/parallel/MountCmd/specific-port 0.93
119 TestFunctional/parallel/Version/short 0.03
120 TestFunctional/parallel/Version/components 0.17
122 TestFunctional/parallel/ImageCommands/ImageListTable 0.08
123 TestFunctional/parallel/ImageCommands/ImageListJson 0.07
124 TestFunctional/parallel/ImageCommands/ImageListYaml 0.08
125 TestFunctional/parallel/ImageCommands/ImageBuild 1.77
126 TestFunctional/parallel/ImageCommands/Setup 1.47
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.18
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.6
129 TestFunctional/parallel/DockerEnv/bash 0.4
130 TestFunctional/parallel/UpdateContextCmd/no_changes 0.06
131 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
132 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
133 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 3.41
134 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.53
135 TestFunctional/parallel/ImageCommands/ImageRemove 0.16
136 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.6
137 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.65
138 TestFunctional/delete_addon-resizer_images 0.12
139 TestFunctional/delete_my-image_image 0.04
140 TestFunctional/delete_minikube_cached_images 0.04
144 TestImageBuild/serial/Setup 29.63
145 TestImageBuild/serial/NormalBuild 1.01
147 TestImageBuild/serial/BuildWithDockerIgnore 0.12
148 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.1
151 TestIngressAddonLegacy/StartLegacyK8sCluster 64.15
153 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 14.89
154 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.27
158 TestJSONOutput/start/Command 45.45
159 TestJSONOutput/start/Audit 0
161 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
162 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
164 TestJSONOutput/pause/Command 0.29
165 TestJSONOutput/pause/Audit 0
167 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
168 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
170 TestJSONOutput/unpause/Command 0.23
171 TestJSONOutput/unpause/Audit 0
173 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
174 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
176 TestJSONOutput/stop/Command 12.08
177 TestJSONOutput/stop/Audit 0
179 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
180 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
181 TestErrorJSONOutput 0.32
186 TestMainNoArgs 0.03
187 TestMinikubeProfile 63.34
243 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
247 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
248 TestNoKubernetes/serial/ProfileList 0.14
249 TestNoKubernetes/serial/Stop 0.06
251 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
269 TestStartStop/group/old-k8s-version/serial/Stop 0.06
270 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.09
274 TestStartStop/group/no-preload/serial/Stop 0.06
275 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.09
291 TestStartStop/group/embed-certs/serial/Stop 0.06
292 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.09
296 TestStartStop/group/default-k8s-diff-port/serial/Stop 0.06
297 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.09
309 TestStartStop/group/newest-cni/serial/DeployApp 0
310 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
311 TestStartStop/group/newest-cni/serial/Stop 0.06
312 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.09
314 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
315 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-498000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-498000: exit status 85 (95.028417ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-498000 | jenkins | v1.31.1 | 09 Aug 23 11:08 PDT |          |
	|         | -p download-only-498000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/09 11:08:50
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.6 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0809 11:08:50.930648    1413 out.go:296] Setting OutFile to fd 1 ...
	I0809 11:08:50.930770    1413 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:08:50.930773    1413 out.go:309] Setting ErrFile to fd 2...
	I0809 11:08:50.930776    1413 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:08:50.930881    1413 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17011-995/.minikube/bin
	W0809 11:08:50.930935    1413 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17011-995/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17011-995/.minikube/config/config.json: no such file or directory
	I0809 11:08:50.932056    1413 out.go:303] Setting JSON to true
	I0809 11:08:50.948374    1413 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":504,"bootTime":1691604026,"procs":409,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0809 11:08:50.948428    1413 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0809 11:08:50.957136    1413 out.go:97] [download-only-498000] minikube v1.31.1 on Darwin 13.5 (arm64)
	I0809 11:08:50.960943    1413 out.go:169] MINIKUBE_LOCATION=17011
	I0809 11:08:50.957298    1413 notify.go:220] Checking for updates...
	W0809 11:08:50.957308    1413 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball: no such file or directory
	I0809 11:08:50.970973    1413 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	I0809 11:08:50.974032    1413 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0809 11:08:50.977033    1413 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0809 11:08:50.979945    1413 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	W0809 11:08:50.986027    1413 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0809 11:08:50.986249    1413 driver.go:373] Setting default libvirt URI to qemu:///system
	I0809 11:08:50.990905    1413 out.go:97] Using the qemu2 driver based on user configuration
	I0809 11:08:50.990912    1413 start.go:298] selected driver: qemu2
	I0809 11:08:50.990914    1413 start.go:901] validating driver "qemu2" against <nil>
	I0809 11:08:50.990974    1413 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0809 11:08:50.994999    1413 out.go:169] Automatically selected the socket_vmnet network
	I0809 11:08:51.001445    1413 start_flags.go:382] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0809 11:08:51.001523    1413 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0809 11:08:51.001574    1413 cni.go:84] Creating CNI manager for ""
	I0809 11:08:51.001592    1413 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0809 11:08:51.001596    1413 start_flags.go:319] config:
	{Name:download-only-498000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-498000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRun
time:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0809 11:08:51.007207    1413 iso.go:125] acquiring lock: {Name:mkcf9da2bbc06f7ffafe691590a499fa3fc28d1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 11:08:51.011018    1413 out.go:97] Downloading VM boot image ...
	I0809 11:08:51.011035    1413 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/17011-995/.minikube/cache/iso/arm64/minikube-v1.31.0-1690838458-16971-arm64.iso
	I0809 11:08:56.279215    1413 out.go:97] Starting control plane node download-only-498000 in cluster download-only-498000
	I0809 11:08:56.279249    1413 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0809 11:08:56.338650    1413 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0809 11:08:56.338669    1413 cache.go:57] Caching tarball of preloaded images
	I0809 11:08:56.338831    1413 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0809 11:08:56.342847    1413 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0809 11:08:56.342853    1413 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0809 11:08:56.422163    1413 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0809 11:09:01.832206    1413 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0809 11:09:01.832347    1413 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0809 11:09:02.471345    1413 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0809 11:09:02.471536    1413 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/download-only-498000/config.json ...
	I0809 11:09:02.471555    1413 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/download-only-498000/config.json: {Name:mk2bde276129fa60a0acedd1cd1f332b26f05753 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 11:09:02.471783    1413 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0809 11:09:02.471953    1413 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 -> /Users/jenkins/minikube-integration/17011-995/.minikube/cache/darwin/arm64/v1.16.0/kubectl
	I0809 11:09:02.948281    1413 out.go:169] 
	W0809 11:09:02.952455    1413 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/17011-995/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x107c04710 0x107c04710 0x107c04710 0x107c04710 0x107c04710 0x107c04710 0x107c04710] Decompressors:map[bz2:0x14000701a80 gz:0x14000701a88 tar:0x14000701a30 tar.bz2:0x14000701a40 tar.gz:0x14000701a50 tar.xz:0x14000701a60 tar.zst:0x14000701a70 tbz2:0x14000701a40 tgz:0x14000701a50 txz:0x14000701a60 tzst:0x14000701a70 xz:0x14000701a90 zip:0x14000701aa0 zst:0x14000701a98] Getters:map[file:0x14000e9c5f0 http:0x140005fc190 https:0x140005fc1e0] Dir:false ProgressListener:
<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0809 11:09:02.952482    1413 out_reason.go:110] 
	W0809 11:09:02.959488    1413 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0809 11:09:02.963359    1413 out.go:169] 
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-498000"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.10s)

TestDownloadOnly/v1.27.4/json-events (10.64s)

=== RUN   TestDownloadOnly/v1.27.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-498000 --force --alsologtostderr --kubernetes-version=v1.27.4 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-498000 --force --alsologtostderr --kubernetes-version=v1.27.4 --container-runtime=docker --driver=qemu2 : (10.638204292s)
--- PASS: TestDownloadOnly/v1.27.4/json-events (10.64s)

TestDownloadOnly/v1.27.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.27.4/preload-exists
--- PASS: TestDownloadOnly/v1.27.4/preload-exists (0.00s)

TestDownloadOnly/v1.27.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.27.4/kubectl
--- PASS: TestDownloadOnly/v1.27.4/kubectl (0.00s)

TestDownloadOnly/v1.27.4/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.27.4/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-498000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-498000: exit status 85 (74.365542ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-498000 | jenkins | v1.31.1 | 09 Aug 23 11:08 PDT |          |
	|         | -p download-only-498000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-498000 | jenkins | v1.31.1 | 09 Aug 23 11:09 PDT |          |
	|         | -p download-only-498000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.27.4   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/09 11:09:03
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.6 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0809 11:09:03.151775    1423 out.go:296] Setting OutFile to fd 1 ...
	I0809 11:09:03.151892    1423 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:09:03.151897    1423 out.go:309] Setting ErrFile to fd 2...
	I0809 11:09:03.151900    1423 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:09:03.152016    1423 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17011-995/.minikube/bin
	W0809 11:09:03.152073    1423 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17011-995/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17011-995/.minikube/config/config.json: no such file or directory
	I0809 11:09:03.152973    1423 out.go:303] Setting JSON to true
	I0809 11:09:03.167809    1423 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":517,"bootTime":1691604026,"procs":409,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0809 11:09:03.167871    1423 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0809 11:09:03.172620    1423 out.go:97] [download-only-498000] minikube v1.31.1 on Darwin 13.5 (arm64)
	I0809 11:09:03.176405    1423 out.go:169] MINIKUBE_LOCATION=17011
	I0809 11:09:03.172747    1423 notify.go:220] Checking for updates...
	I0809 11:09:03.182570    1423 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	I0809 11:09:03.183967    1423 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0809 11:09:03.186610    1423 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0809 11:09:03.189560    1423 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	W0809 11:09:03.195546    1423 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0809 11:09:03.195845    1423 config.go:182] Loaded profile config "download-only-498000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0809 11:09:03.195867    1423 start.go:809] api.Load failed for download-only-498000: filestore "download-only-498000": Docker machine "download-only-498000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0809 11:09:03.195910    1423 driver.go:373] Setting default libvirt URI to qemu:///system
	W0809 11:09:03.195922    1423 start.go:809] api.Load failed for download-only-498000: filestore "download-only-498000": Docker machine "download-only-498000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0809 11:09:03.198581    1423 out.go:97] Using the qemu2 driver based on existing profile
	I0809 11:09:03.198588    1423 start.go:298] selected driver: qemu2
	I0809 11:09:03.198590    1423 start.go:901] validating driver "qemu2" against &{Name:download-only-498000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-498000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binar
yMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0809 11:09:03.200554    1423 cni.go:84] Creating CNI manager for ""
	I0809 11:09:03.200568    1423 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0809 11:09:03.200577    1423 start_flags.go:319] config:
	{Name:download-only-498000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:download-only-498000 Namespace:def
ault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath
: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0809 11:09:03.204580    1423 iso.go:125] acquiring lock: {Name:mkcf9da2bbc06f7ffafe691590a499fa3fc28d1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 11:09:03.207530    1423 out.go:97] Starting control plane node download-only-498000 in cluster download-only-498000
	I0809 11:09:03.207537    1423 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0809 11:09:03.262465    1423 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.4/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4
	I0809 11:09:03.262480    1423 cache.go:57] Caching tarball of preloaded images
	I0809 11:09:03.262635    1423 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0809 11:09:03.267777    1423 out.go:97] Downloading Kubernetes v1.27.4 preload ...
	I0809 11:09:03.267784    1423 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 ...
	I0809 11:09:03.344084    1423 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.4/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4?checksum=md5:883217b4c813700d926caf1a3f55f0b8 -> /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4
	I0809 11:09:09.564382    1423 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 ...
	I0809 11:09:09.564518    1423 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-docker-overlay2-arm64.tar.lz4 ...
	I0809 11:09:10.122502    1423 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on docker
	I0809 11:09:10.122573    1423 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/download-only-498000/config.json ...
	I0809 11:09:10.122817    1423 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime docker
	I0809 11:09:10.122970    1423 download.go:107] Downloading: https://dl.k8s.io/release/v1.27.4/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.27.4/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/17011-995/.minikube/cache/darwin/arm64/v1.27.4/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-498000"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.27.4/LogsDuration (0.07s)

TestDownloadOnly/v1.28.0-rc.0/json-events (19.38s)

=== RUN   TestDownloadOnly/v1.28.0-rc.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-498000 --force --alsologtostderr --kubernetes-version=v1.28.0-rc.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-498000 --force --alsologtostderr --kubernetes-version=v1.28.0-rc.0 --container-runtime=docker --driver=qemu2 : (19.382884084s)
--- PASS: TestDownloadOnly/v1.28.0-rc.0/json-events (19.38s)

TestDownloadOnly/v1.28.0-rc.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.28.0-rc.0/preload-exists (0.00s)

TestDownloadOnly/v1.28.0-rc.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0-rc.0/kubectl
--- PASS: TestDownloadOnly/v1.28.0-rc.0/kubectl (0.00s)

TestDownloadOnly/v1.28.0-rc.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.28.0-rc.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-498000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-498000: exit status 85 (73.305916ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-498000 | jenkins | v1.31.1 | 09 Aug 23 11:08 PDT |          |
	|         | -p download-only-498000           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=qemu2                    |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-498000 | jenkins | v1.31.1 | 09 Aug 23 11:09 PDT |          |
	|         | -p download-only-498000           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.27.4      |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=qemu2                    |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-498000 | jenkins | v1.31.1 | 09 Aug 23 11:09 PDT |          |
	|         | -p download-only-498000           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.0-rc.0 |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=qemu2                    |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/09 11:09:13
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.6 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0809 11:09:13.865862    1431 out.go:296] Setting OutFile to fd 1 ...
	I0809 11:09:13.865988    1431 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:09:13.865991    1431 out.go:309] Setting ErrFile to fd 2...
	I0809 11:09:13.865993    1431 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:09:13.866107    1431 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17011-995/.minikube/bin
	W0809 11:09:13.866161    1431 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17011-995/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17011-995/.minikube/config/config.json: no such file or directory
	I0809 11:09:13.867057    1431 out.go:303] Setting JSON to true
	I0809 11:09:13.882196    1431 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":527,"bootTime":1691604026,"procs":408,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0809 11:09:13.882271    1431 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0809 11:09:13.887199    1431 out.go:97] [download-only-498000] minikube v1.31.1 on Darwin 13.5 (arm64)
	I0809 11:09:13.891129    1431 out.go:169] MINIKUBE_LOCATION=17011
	I0809 11:09:13.887309    1431 notify.go:220] Checking for updates...
	I0809 11:09:13.897072    1431 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	I0809 11:09:13.900134    1431 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0809 11:09:13.903184    1431 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0809 11:09:13.904484    1431 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	W0809 11:09:13.910135    1431 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0809 11:09:13.910413    1431 config.go:182] Loaded profile config "download-only-498000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	W0809 11:09:13.910432    1431 start.go:809] api.Load failed for download-only-498000: filestore "download-only-498000": Docker machine "download-only-498000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0809 11:09:13.910480    1431 driver.go:373] Setting default libvirt URI to qemu:///system
	W0809 11:09:13.910492    1431 start.go:809] api.Load failed for download-only-498000: filestore "download-only-498000": Docker machine "download-only-498000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0809 11:09:13.913124    1431 out.go:97] Using the qemu2 driver based on existing profile
	I0809 11:09:13.913132    1431 start.go:298] selected driver: qemu2
	I0809 11:09:13.913134    1431 start.go:901] validating driver "qemu2" against &{Name:download-only-498000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.27.4 ClusterName:download-only-498000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binar
yMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0809 11:09:13.914990    1431 cni.go:84] Creating CNI manager for ""
	I0809 11:09:13.915002    1431 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0809 11:09:13.915168    1431 start_flags.go:319] config:
	{Name:download-only-498000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0-rc.0 ClusterName:download-only-498000 Namespac
e:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwar
ePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0809 11:09:13.919806    1431 iso.go:125] acquiring lock: {Name:mkcf9da2bbc06f7ffafe691590a499fa3fc28d1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 11:09:13.923169    1431 out.go:97] Starting control plane node download-only-498000 in cluster download-only-498000
	I0809 11:09:13.923177    1431 preload.go:132] Checking if preload exists for k8s version v1.28.0-rc.0 and runtime docker
	I0809 11:09:13.986076    1431 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0-rc.0/preloaded-images-k8s-v18-v1.28.0-rc.0-docker-overlay2-arm64.tar.lz4
	I0809 11:09:13.986095    1431 cache.go:57] Caching tarball of preloaded images
	I0809 11:09:13.986247    1431 preload.go:132] Checking if preload exists for k8s version v1.28.0-rc.0 and runtime docker
	I0809 11:09:13.990495    1431 out.go:97] Downloading Kubernetes v1.28.0-rc.0 preload ...
	I0809 11:09:13.990503    1431 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.0-rc.0-docker-overlay2-arm64.tar.lz4 ...
	I0809 11:09:14.064902    1431 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0-rc.0/preloaded-images-k8s-v18-v1.28.0-rc.0-docker-overlay2-arm64.tar.lz4?checksum=md5:04a144304e4bfcfd407ef003a22c4a23 -> /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-rc.0-docker-overlay2-arm64.tar.lz4
	I0809 11:09:25.682553    1431 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.0-rc.0-docker-overlay2-arm64.tar.lz4 ...
	I0809 11:09:25.682686    1431 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17011-995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-rc.0-docker-overlay2-arm64.tar.lz4 ...
	I0809 11:09:26.261971    1431 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.0-rc.0 on docker
	I0809 11:09:26.262038    1431 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/download-only-498000/config.json ...
	I0809 11:09:26.262301    1431 preload.go:132] Checking if preload exists for k8s version v1.28.0-rc.0 and runtime docker
	I0809 11:09:26.262452    1431 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.0-rc.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0-rc.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/17011-995/.minikube/cache/darwin/arm64/v1.28.0-rc.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-498000"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0-rc.0/LogsDuration (0.07s)

TestDownloadOnly/DeleteAll (0.27s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.27s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.23s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-498000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.23s)

TestBinaryMirror (0.34s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-667000 --alsologtostderr --binary-mirror http://127.0.0.1:49324 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-667000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-667000
--- PASS: TestBinaryMirror (0.34s)

TestHyperKitDriverInstallOrUpdate (8.29s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (8.29s)

TestErrorSpam/setup (30.14s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-020000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-020000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-020000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-020000 --driver=qemu2 : (30.141074666s)
--- PASS: TestErrorSpam/setup (30.14s)

TestErrorSpam/start (0.35s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-020000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-020000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-020000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-020000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-020000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-020000 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

TestErrorSpam/status (0.27s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-020000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-020000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-020000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-020000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-020000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-020000 status
--- PASS: TestErrorSpam/status (0.27s)

TestErrorSpam/pause (0.68s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-020000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-020000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-020000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-020000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-020000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-020000 pause
--- PASS: TestErrorSpam/pause (0.68s)

TestErrorSpam/unpause (0.63s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-020000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-020000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-020000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-020000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-020000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-020000 unpause
--- PASS: TestErrorSpam/unpause (0.63s)

TestErrorSpam/stop (3.23s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-020000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-020000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-020000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-020000 stop: (3.066381167s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-020000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-020000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-020000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-020000 stop
--- PASS: TestErrorSpam/stop (3.23s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/17011-995/.minikube/files/etc/test/nested/copy/1410/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (82.97s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-901000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Done: out/minikube-darwin-arm64 start -p functional-901000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (1m22.970189125s)
--- PASS: TestFunctional/serial/StartWithProxy (82.97s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (34.81s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-901000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-arm64 start -p functional-901000 --alsologtostderr -v=8: (34.810898792s)
functional_test.go:659: soft start took 34.811346167s for "functional-901000" cluster.
--- PASS: TestFunctional/serial/SoftStart (34.81s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.05s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-901000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.05s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.59s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-901000 cache add registry.k8s.io/pause:3.1: (1.281824791s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-901000 cache add registry.k8s.io/pause:3.3: (1.248716958s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-901000 cache add registry.k8s.io/pause:latest: (1.061752709s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.59s)

TestFunctional/serial/CacheCmd/cache/add_local (1.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-901000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local2084138013/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 cache add minikube-local-cache-test:functional-901000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 cache delete minikube-local-cache-test:functional-901000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-901000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.27s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.03s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.93s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-901000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (66.180167ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.93s)

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (0.42s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 kubectl -- --context functional-901000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.42s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.56s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-901000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.56s)

TestFunctional/serial/ExtraConfig (37.12s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-901000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-darwin-arm64 start -p functional-901000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.11989525s)
functional_test.go:757: restart took 37.120010292s for "functional-901000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (37.12s)

TestFunctional/serial/ComponentHealth (0.04s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-901000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)

TestFunctional/serial/LogsCmd (0.64s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.64s)

TestFunctional/serial/LogsFileCmd (0.63s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd1862807928/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.63s)

TestFunctional/serial/InvalidService (4.06s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-901000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-901000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-901000: exit status 115 (147.437709ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:31971 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-901000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.06s)

TestFunctional/parallel/ConfigCmd (0.21s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-901000 config get cpus: exit status 14 (29.186375ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-901000 config get cpus: exit status 14 (28.679541ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.21s)

TestFunctional/parallel/DashboardCmd (7.24s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-901000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-901000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2009: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.24s)

TestFunctional/parallel/DryRun (0.22s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-901000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-901000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (115.69ms)

-- stdout --
	* [functional-901000] minikube v1.31.1 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17011
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0809 11:14:27.952067    1988 out.go:296] Setting OutFile to fd 1 ...
	I0809 11:14:27.952182    1988 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:14:27.952189    1988 out.go:309] Setting ErrFile to fd 2...
	I0809 11:14:27.952191    1988 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:14:27.952304    1988 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17011-995/.minikube/bin
	I0809 11:14:27.953318    1988 out.go:303] Setting JSON to false
	I0809 11:14:27.968995    1988 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":841,"bootTime":1691604026,"procs":401,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0809 11:14:27.969097    1988 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0809 11:14:27.972403    1988 out.go:177] * [functional-901000] minikube v1.31.1 on Darwin 13.5 (arm64)
	I0809 11:14:27.983308    1988 out.go:177]   - MINIKUBE_LOCATION=17011
	I0809 11:14:27.979273    1988 notify.go:220] Checking for updates...
	I0809 11:14:27.993254    1988 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	I0809 11:14:27.994719    1988 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0809 11:14:27.998269    1988 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0809 11:14:28.001275    1988 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	I0809 11:14:28.004252    1988 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0809 11:14:28.007476    1988 config.go:182] Loaded profile config "functional-901000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0809 11:14:28.007710    1988 driver.go:373] Setting default libvirt URI to qemu:///system
	I0809 11:14:28.012275    1988 out.go:177] * Using the qemu2 driver based on existing profile
	I0809 11:14:28.019250    1988 start.go:298] selected driver: qemu2
	I0809 11:14:28.019255    1988 start.go:901] validating driver "qemu2" against &{Name:functional-901000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.27.4 ClusterName:functional-901000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false E
xtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0809 11:14:28.019302    1988 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0809 11:14:28.025312    1988 out.go:177] 
	W0809 11:14:28.029213    1988 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0809 11:14:28.033211    1988 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-901000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.22s)

TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-901000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-901000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (108.089083ms)

-- stdout --
	* [functional-901000] minikube v1.31.1 sur Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17011
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0809 11:14:27.840434    1984 out.go:296] Setting OutFile to fd 1 ...
	I0809 11:14:27.840540    1984 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:14:27.840544    1984 out.go:309] Setting ErrFile to fd 2...
	I0809 11:14:27.840546    1984 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 11:14:27.840677    1984 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17011-995/.minikube/bin
	I0809 11:14:27.842042    1984 out.go:303] Setting JSON to false
	I0809 11:14:27.859110    1984 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":841,"bootTime":1691604026,"procs":401,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0809 11:14:27.859184    1984 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0809 11:14:27.863275    1984 out.go:177] * [functional-901000] minikube v1.31.1 sur Darwin 13.5 (arm64)
	I0809 11:14:27.870293    1984 out.go:177]   - MINIKUBE_LOCATION=17011
	I0809 11:14:27.870296    1984 notify.go:220] Checking for updates...
	I0809 11:14:27.873256    1984 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	I0809 11:14:27.877294    1984 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0809 11:14:27.880239    1984 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0809 11:14:27.883285    1984 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	I0809 11:14:27.886271    1984 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0809 11:14:27.887886    1984 config.go:182] Loaded profile config "functional-901000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
	I0809 11:14:27.888122    1984 driver.go:373] Setting default libvirt URI to qemu:///system
	I0809 11:14:27.894224    1984 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0809 11:14:27.904263    1984 start.go:298] selected driver: qemu2
	I0809 11:14:27.904269    1984 start.go:901] validating driver "qemu2" against &{Name:functional-901000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-1690838458-16971-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.27.4 ClusterName:functional-901000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.27.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false E
xtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0809 11:14:27.904340    1984 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0809 11:14:27.910309    1984 out.go:177] 
	W0809 11:14:27.913196    1984 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0809 11:14:27.917271    1984 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

TestFunctional/parallel/StatusCmd (0.28s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.28s)

TestFunctional/parallel/AddonsCmd (0.17s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

TestFunctional/parallel/PersistentVolumeClaim (24.13s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [2dca2a2b-adb1-41a7-9fe6-7e1b3de8c91e] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.019479042s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-901000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-901000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-901000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-901000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f63bab89-7e3f-429c-953e-0beb123161ac] Pending
helpers_test.go:344: "sp-pod" [f63bab89-7e3f-429c-953e-0beb123161ac] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [f63bab89-7e3f-429c-953e-0beb123161ac] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.011060417s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-901000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-901000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-901000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [b7492e77-6e0d-4f7a-9f38-fd86c7ceae9c] Pending
helpers_test.go:344: "sp-pod" [b7492e77-6e0d-4f7a-9f38-fd86c7ceae9c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [b7492e77-6e0d-4f7a-9f38-fd86c7ceae9c] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.016328333s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-901000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.13s)

TestFunctional/parallel/SSHCmd (0.13s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.13s)

TestFunctional/parallel/CpCmd (0.28s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 ssh -n functional-901000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 cp functional-901000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd816122389/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 ssh -n functional-901000 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.28s)

TestFunctional/parallel/FileSync (0.07s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1410/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 ssh "sudo cat /etc/test/nested/copy/1410/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.07s)

TestFunctional/parallel/CertSync (0.45s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1410.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 ssh "sudo cat /etc/ssl/certs/1410.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1410.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 ssh "sudo cat /usr/share/ca-certificates/1410.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/14102.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 ssh "sudo cat /etc/ssl/certs/14102.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/14102.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 ssh "sudo cat /usr/share/ca-certificates/14102.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.45s)

TestFunctional/parallel/NodeLabels (0.04s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-901000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.12s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-901000 ssh "sudo systemctl is-active crio": exit status 1 (121.199458ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.12s)

TestFunctional/parallel/License (0.23s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.23s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.21s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-901000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-901000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-901000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1836: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-901000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.21s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-901000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-901000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [eb0a1ca3-c5b5-4e6a-a907-71205011de10] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [eb0a1ca3-c5b5-4e6a-a907-71205011de10] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.005168042s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.12s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-901000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.102.173.206 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.04s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.03s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.03s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-901000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

TestFunctional/parallel/ServiceCmd/DeployApp (6.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-901000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-901000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-7b684b55f9-r2vqb" [c1ad0be7-b3ff-4b79-bb48-ae2752acf50a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-7b684b55f9-r2vqb" [c1ad0be7-b3ff-4b79-bb48-ae2752acf50a] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.013287958s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.09s)

TestFunctional/parallel/ServiceCmd/List (0.31s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.31s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.29s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 service list -o json
functional_test.go:1493: Took "286.218417ms" to run "out/minikube-darwin-arm64 -p functional-901000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.29s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.12s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.105.4:31218
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.12s)

TestFunctional/parallel/ServiceCmd/Format (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.10s)

TestFunctional/parallel/ServiceCmd/URL (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.105.4:31218
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.10s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.19s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.19s)

TestFunctional/parallel/ProfileCmd/profile_list (0.15s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1314: Took "115.1725ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1328: Took "33.637833ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.15s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.15s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1365: Took "112.085667ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1378: Took "33.210792ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.15s)
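The harness reports wall-clock timings as Go-style duration strings ("112.085667ms", "33.210792ms" above). When scripting against this report, those strings can be normalized to seconds; a minimal sketch, where the regex and the unit table are assumptions about the format rather than anything minikube provides:

```python
import re

# Multipliers for the duration suffixes seen in this log (ms, s);
# "m" for minutes is included as an assumption about Go's duration syntax.
UNITS = {"ms": 1e-3, "s": 1.0, "m": 60.0}

def to_seconds(duration: str) -> float:
    """Parse a Go-style duration like '112.085667ms' into float seconds."""
    m = re.fullmatch(r"([\d.]+)(ms|s|m)", duration)
    if not m:
        raise ValueError(f"unrecognized duration: {duration!r}")
    return float(m.group(1)) * UNITS[m.group(2)]
```

For example, `to_seconds("33.210792ms")` comes out near 0.0332, which makes it easy to compare the `--light` listing against the full one.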

TestFunctional/parallel/MountCmd/specific-port (0.93s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-901000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1457834126/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-901000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (60.002666ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-901000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1457834126/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-901000 ssh "sudo umount -f /mount-9p": exit status 1 (60.35875ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-901000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-901000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1457834126/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (0.93s)

TestFunctional/parallel/Version/short (0.03s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.03s)

TestFunctional/parallel/Version/components (0.17s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.17s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-901000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-901000 | 85e7c66e47a0e | 30B    |
| registry.k8s.io/kube-controller-manager     | v1.27.4           | 389f6f052cf83 | 107MB  |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/etcd                        | 3.5.7-0           | 24bc64e911039 | 181MB  |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| docker.io/library/nginx                     | alpine            | 7987e0c18af05 | 40.9MB |
| docker.io/library/nginx                     | latest            | ff78c7a65ec2b | 192MB  |
| registry.k8s.io/kube-proxy                  | v1.27.4           | 532e5a30e948f | 66.5MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/kube-apiserver              | v1.27.4           | 64aece92d6bde | 115MB  |
| registry.k8s.io/coredns/coredns             | v1.10.1           | 97e04611ad434 | 51.4MB |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| registry.k8s.io/kube-scheduler              | v1.27.4           | 6eb63895cb67f | 56.2MB |
| gcr.io/google-containers/addon-resizer      | functional-901000 | ffd4cfbbe753e | 32.9MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-901000 image ls --format table --alsologtostderr:
I0809 11:14:46.515500    2185 out.go:296] Setting OutFile to fd 1 ...
I0809 11:14:46.515654    2185 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0809 11:14:46.515659    2185 out.go:309] Setting ErrFile to fd 2...
I0809 11:14:46.515661    2185 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0809 11:14:46.515790    2185 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17011-995/.minikube/bin
I0809 11:14:46.516203    2185 config.go:182] Loaded profile config "functional-901000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
I0809 11:14:46.516271    2185 config.go:182] Loaded profile config "functional-901000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
I0809 11:14:46.517199    2185 ssh_runner.go:195] Run: systemctl --version
I0809 11:14:46.517209    2185 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/functional-901000/id_rsa Username:docker}
I0809 11:14:46.546532    2185 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-901000 image ls --format json --alsologtostderr:
[{"id":"64aece92d6bde5b472d8185fcd2d5ab1add8814923a26561821f7cab5e819388","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.27.4"],"size":"115000000"},{"id":"24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.7-0"],"size":"181000000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"85e7c66e47a0efecea3fee472c22ce50a5b21b8407c3b8dcc2e3eabffa4c4f7c","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-901000"],"size":"30"},{"id":"532e5a30e948f1c084333316b13e68fbeff8df667f3830b082005127a6d86317","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.27.4"],"size":"66500000"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51400000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"ff78c7a65ec2b1fb09f58b27b0dd022ac1f4e16b9bcfe1cbdc18c36f2e0e1842","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"192000000"},{"id":"389f6f052cf83156f82a2bbbf6ea2c24292d246b58900d91f6a1707eacf510b2","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.27.4"],"size":"107000000"},{"id":"6eb63895cb67fce76da3ed6eaaa865ff55e7c761c9e6a691a83855ff0987a085","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.27.4"],"size":"56200000"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"7987e0c18af05e20ea2f672d05e2fe43960553df199d00536b89ea5514c1cf36","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"40900000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-901000"],"size":"32900000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-901000 image ls --format json --alsologtostderr:
I0809 11:14:46.441245    2181 out.go:296] Setting OutFile to fd 1 ...
I0809 11:14:46.441384    2181 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0809 11:14:46.441389    2181 out.go:309] Setting ErrFile to fd 2...
I0809 11:14:46.441392    2181 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0809 11:14:46.441523    2181 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17011-995/.minikube/bin
I0809 11:14:46.441959    2181 config.go:182] Loaded profile config "functional-901000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
I0809 11:14:46.442017    2181 config.go:182] Loaded profile config "functional-901000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
I0809 11:14:46.442774    2181 ssh_runner.go:195] Run: systemctl --version
I0809 11:14:46.442784    2181 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/functional-901000/id_rsa Username:docker}
I0809 11:14:46.471614    2181 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)
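The JSON listing above lends itself to scripting. A minimal sketch that totals the reported image sizes, using a trimmed two-image sample in the same shape as the real output (ids shortened for brevity; sizes are decimal strings, so they need converting before arithmetic):

```python
import json

# Trimmed sample shaped like `minikube image ls --format json` output.
sample = (
    '[{"id":"829e9de338bd","repoDigests":[],'
    '"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"},'
    '{"id":"ff78c7a65ec2","repoDigests":[],'
    '"repoTags":["docker.io/library/nginx:latest"],"size":"192000000"}]'
)

images = json.loads(sample)
# "size" is a decimal string in the listing, so convert before summing.
total_bytes = sum(int(img["size"]) for img in images)
for img in images:
    print(img["repoTags"][0], img["size"])
print("total:", total_bytes)
```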

TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-901000 image ls --format yaml --alsologtostderr:
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 7987e0c18af05e20ea2f672d05e2fe43960553df199d00536b89ea5514c1cf36
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "40900000"
- id: ff78c7a65ec2b1fb09f58b27b0dd022ac1f4e16b9bcfe1cbdc18c36f2e0e1842
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "192000000"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51400000"
- id: 24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.7-0
size: "181000000"
- id: 6eb63895cb67fce76da3ed6eaaa865ff55e7c761c9e6a691a83855ff0987a085
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.27.4
size: "56200000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-901000
size: "32900000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 85e7c66e47a0efecea3fee472c22ce50a5b21b8407c3b8dcc2e3eabffa4c4f7c
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-901000
size: "30"
- id: 64aece92d6bde5b472d8185fcd2d5ab1add8814923a26561821f7cab5e819388
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.27.4
size: "115000000"
- id: 389f6f052cf83156f82a2bbbf6ea2c24292d246b58900d91f6a1707eacf510b2
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.27.4
size: "107000000"
- id: 532e5a30e948f1c084333316b13e68fbeff8df667f3830b082005127a6d86317
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.27.4
size: "66500000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-901000 image ls --format yaml --alsologtostderr:
I0809 11:14:46.368902    2176 out.go:296] Setting OutFile to fd 1 ...
I0809 11:14:46.369045    2176 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0809 11:14:46.369049    2176 out.go:309] Setting ErrFile to fd 2...
I0809 11:14:46.369051    2176 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0809 11:14:46.369179    2176 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17011-995/.minikube/bin
I0809 11:14:46.369564    2176 config.go:182] Loaded profile config "functional-901000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
I0809 11:14:46.369631    2176 config.go:182] Loaded profile config "functional-901000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
I0809 11:14:46.370542    2176 ssh_runner.go:195] Run: systemctl --version
I0809 11:14:46.370552    2176 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/functional-901000/id_rsa Username:docker}
I0809 11:14:46.397220    2176 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

TestFunctional/parallel/ImageCommands/ImageBuild (1.77s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-901000 ssh pgrep buildkitd: exit status 1 (60.709125ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 image build -t localhost/my-image:functional-901000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-arm64 -p functional-901000 image build -t localhost/my-image:functional-901000 testdata/build --alsologtostderr: (1.638871584s)
functional_test.go:319: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-901000 image build -t localhost/my-image:functional-901000 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
a01966dde7f8: Pulling fs layer
a01966dde7f8: Verifying Checksum
a01966dde7f8: Download complete
a01966dde7f8: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> 71a676dd070f
Step 2/3 : RUN true
---> Running in 76fa05b94489
Removing intermediate container 76fa05b94489
---> 0acbfc195460
Step 3/3 : ADD content.txt /
---> 0840c7f599ae
Successfully built 0840c7f599ae
Successfully tagged localhost/my-image:functional-901000
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-901000 image build -t localhost/my-image:functional-901000 testdata/build --alsologtostderr:
I0809 11:14:46.464503    2183 out.go:296] Setting OutFile to fd 1 ...
I0809 11:14:46.464730    2183 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0809 11:14:46.464735    2183 out.go:309] Setting ErrFile to fd 2...
I0809 11:14:46.464738    2183 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0809 11:14:46.464857    2183 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17011-995/.minikube/bin
I0809 11:14:46.465273    2183 config.go:182] Loaded profile config "functional-901000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
I0809 11:14:46.465916    2183 config.go:182] Loaded profile config "functional-901000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.4
I0809 11:14:46.466835    2183 ssh_runner.go:195] Run: systemctl --version
I0809 11:14:46.466844    2183 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17011-995/.minikube/machines/functional-901000/id_rsa Username:docker}
I0809 11:14:46.494957    2183 build_images.go:151] Building image from path: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.1581054288.tar
I0809 11:14:46.495014    2183 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0809 11:14:46.498657    2183 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1581054288.tar
I0809 11:14:46.500537    2183 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1581054288.tar: stat -c "%s %y" /var/lib/minikube/build/build.1581054288.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1581054288.tar': No such file or directory
I0809 11:14:46.500581    2183 ssh_runner.go:362] scp /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.1581054288.tar --> /var/lib/minikube/build/build.1581054288.tar (3072 bytes)
I0809 11:14:46.508953    2183 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1581054288
I0809 11:14:46.512753    2183 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1581054288 -xf /var/lib/minikube/build/build.1581054288.tar
I0809 11:14:46.515969    2183 docker.go:339] Building image: /var/lib/minikube/build/build.1581054288
I0809 11:14:46.516012    2183 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-901000 /var/lib/minikube/build/build.1581054288
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I0809 11:14:48.063088    2183 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-901000 /var/lib/minikube/build/build.1581054288: (1.547116208s)
I0809 11:14:48.063151    2183 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1581054288
I0809 11:14:48.066085    2183 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1581054288.tar
I0809 11:14:48.068786    2183 build_images.go:207] Built localhost/my-image:functional-901000 from /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.1581054288.tar
I0809 11:14:48.068800    2183 build_images.go:123] succeeded building to: functional-901000
I0809 11:14:48.068803    2183 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.77s)
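The stderr trace above shows the build path: minikube packs the local `testdata/build` directory into a tar, copies it into the VM, untars it, and runs `docker build` there. A minimal in-memory sketch of the packaging step; the file names and contents are illustrative, inferred from the three build steps in the stdout above, not taken from minikube's source:

```python
import io
import tarfile

def pack_context(files: dict[str, bytes]) -> bytes:
    """Pack a mapping of {path: contents} into an uncompressed tar archive,
    mirroring how a build context travels to the VM before `docker build`."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for name, data in files.items():
            info = tarfile.TarInfo(name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
    return buf.getvalue()

# Hypothetical context matching the Dockerfile steps seen in the build log.
ctx = pack_context({
    "Dockerfile": b"FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n",
    "content.txt": b"hello",
})
print("context bytes:", len(ctx))
```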

TestFunctional/parallel/ImageCommands/Setup (1.47s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.426603667s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-901000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.47s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 image load --daemon gcr.io/google-containers/addon-resizer:functional-901000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-arm64 -p functional-901000 image load --daemon gcr.io/google-containers/addon-resizer:functional-901000 --alsologtostderr: (2.104381875s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.18s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 image load --daemon gcr.io/google-containers/addon-resizer:functional-901000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-arm64 -p functional-901000 image load --daemon gcr.io/google-containers/addon-resizer:functional-901000 --alsologtostderr: (1.527830791s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.60s)

TestFunctional/parallel/DockerEnv/bash (0.4s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-901000 docker-env) && out/minikube-darwin-arm64 status -p functional-901000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-901000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.40s)
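The DockerEnv test above exercises `minikube docker-env`, which prints shell exports (DOCKER_HOST and related variables) that point the host's `docker` CLI at the daemon inside the minikube VM. A minimal usage sketch, assuming the `functional-901000` profile is running:

```shell
# Point the host docker CLI at the daemon inside the minikube VM.
eval "$(minikube -p functional-901000 docker-env)"
docker images   # now lists images from the VM's daemon, not the host's

# Revert to the host daemon when done.
eval "$(minikube -p functional-901000 docker-env --unset)"
```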

TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.41s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.353927041s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-901000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 image load --daemon gcr.io/google-containers/addon-resizer:functional-901000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-arm64 -p functional-901000 image load --daemon gcr.io/google-containers/addon-resizer:functional-901000 --alsologtostderr: (1.934963166s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.41s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 image save gcr.io/google-containers/addon-resizer:functional-901000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 image rm gcr.io/google-containers/addon-resizer:functional-901000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.16s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.60s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.65s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-901000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 image save --daemon gcr.io/google-containers/addon-resizer:functional-901000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-901000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.65s)

TestFunctional/delete_addon-resizer_images (0.12s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-901000
--- PASS: TestFunctional/delete_addon-resizer_images (0.12s)

TestFunctional/delete_my-image_image (0.04s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-901000
--- PASS: TestFunctional/delete_my-image_image (0.04s)

TestFunctional/delete_minikube_cached_images (0.04s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-901000
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

TestImageBuild/serial/Setup (29.63s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-340000 --driver=qemu2 
image_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -p image-340000 --driver=qemu2 : (29.630602s)
--- PASS: TestImageBuild/serial/Setup (29.63s)

TestImageBuild/serial/NormalBuild (1.01s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-340000
image_test.go:78: (dbg) Done: out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-340000: (1.008704625s)
--- PASS: TestImageBuild/serial/NormalBuild (1.01s)

TestImageBuild/serial/BuildWithDockerIgnore (0.12s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-340000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.12s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.1s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-340000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.10s)

TestIngressAddonLegacy/StartLegacyK8sCluster (64.15s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-arm64 start -p ingress-addon-legacy-050000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=qemu2 
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-darwin-arm64 start -p ingress-addon-legacy-050000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=qemu2 : (1m4.146703375s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (64.15s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (14.89s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-050000 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-050000 addons enable ingress --alsologtostderr -v=5: (14.885906166s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (14.89s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.27s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-050000 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.27s)

TestJSONOutput/start/Command (45.45s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-204000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 start -p json-output-204000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : (45.450416209s)
--- PASS: TestJSONOutput/start/Command (45.45s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.29s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-204000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.29s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.23s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-204000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.23s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (12.08s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-204000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-204000 --output=json --user=testUser: (12.077038s)
--- PASS: TestJSONOutput/stop/Command (12.08s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.32s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-227000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-227000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (94.666417ms)

-- stdout --
	{"specversion":"1.0","id":"cba3df7f-29c5-41ec-a880-1d0a5f683b79","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-227000] minikube v1.31.1 on Darwin 13.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"76118746-367f-4b9e-b064-c7a649717bf9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17011"}}
	{"specversion":"1.0","id":"4ef76942-cc9e-4f51-8ff7-8280060048a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig"}}
	{"specversion":"1.0","id":"50c8a564-68e3-49b6-8f78-e892f863bad7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"13015c2d-44eb-47ce-a730-e467a4571147","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b2248592-5118-44d4-8dda-ce2a7493307f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube"}}
	{"specversion":"1.0","id":"a1b68065-4fc9-41ae-89e8-fba79fdf457d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b9ae33ec-e580-46df-bf4c-e2ddee44ad84","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-227000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-227000
--- PASS: TestErrorJSONOutput (0.32s)
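Each line in the stdout block above is a CloudEvents-style JSON record, which is the format `--output=json` produces. As a sketch, the event type can be pulled out of one such record with plain shell tools (the record below is abridged from the log, keeping only the fields used here):

```shell
# Extract the "type" field from one abridged minikube JSON event.
line='{"specversion":"1.0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","data":{"message":"MINIKUBE_LOCATION=17011"}}'
event_type=$(printf '%s' "$line" | sed -E 's/.*"type":"([^"]+)".*/\1/')
echo "$event_type"   # io.k8s.sigs.minikube.info
```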

TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestMinikubeProfile (63.34s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-944000 --driver=qemu2 
E0809 11:18:42.208094    1410 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/functional-901000/client.crt: no such file or directory
E0809 11:18:42.214926    1410 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/functional-901000/client.crt: no such file or directory
E0809 11:18:42.226525    1410 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/functional-901000/client.crt: no such file or directory
E0809 11:18:42.248556    1410 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/functional-901000/client.crt: no such file or directory
E0809 11:18:42.290598    1410 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/functional-901000/client.crt: no such file or directory
E0809 11:18:42.372659    1410 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/functional-901000/client.crt: no such file or directory
E0809 11:18:42.534694    1410 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/functional-901000/client.crt: no such file or directory
E0809 11:18:42.856718    1410 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/functional-901000/client.crt: no such file or directory
E0809 11:18:43.498800    1410 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/functional-901000/client.crt: no such file or directory
E0809 11:18:44.780861    1410 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/functional-901000/client.crt: no such file or directory
E0809 11:18:47.342856    1410 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/functional-901000/client.crt: no such file or directory
E0809 11:18:52.463381    1410 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/functional-901000/client.crt: no such file or directory
E0809 11:19:02.704419    1410 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/functional-901000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-arm64 start -p first-944000 --driver=qemu2 : (29.861343083s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p second-945000 --driver=qemu2 
E0809 11:19:23.185871    1410 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17011-995/.minikube/profiles/functional-901000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-arm64 start -p second-945000 --driver=qemu2 : (32.704120125s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 profile first-944000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 profile second-945000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-945000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-945000
helpers_test.go:175: Cleaning up "first-944000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-944000
--- PASS: TestMinikubeProfile (63.34s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-803000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-803000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (87.779208ms)
-- stdout --
	* [NoKubernetes-803000] minikube v1.31.1 on Darwin 13.5 (arm64)
	  - MINIKUBE_LOCATION=17011
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17011-995/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17011-995/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
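The MK_USAGE failure above is the behavior the test expects: `--no-kubernetes` and `--kubernetes-version` are mutually exclusive, and minikube exits with status 14 instead of starting. A minimal sketch of how the harness asserts on that exit code (the `usage_error` stub below is illustrative, standing in for the real `minikube start` invocation):

```shell
# Stand-in for the failing start invocation; minikube's MK_USAGE errors
# exit with status 14, and the harness checks the code, not the text.
usage_error() { return 14; }

if usage_error; then
  echo "unexpected success"
else
  echo "exit status $?"   # → exit status 14
fi
```

In the real test, a non-zero exit with exactly status 14 is what lets `StartNoK8sWithVersion` pass despite the command failing.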

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-803000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-803000 "sudo systemctl is-active --quiet service kubelet": exit status 89 (42.391625ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p NoKubernetes-803000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (0.14s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.14s)

TestNoKubernetes/serial/Stop (0.06s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-803000
--- PASS: TestNoKubernetes/serial/Stop (0.06s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-803000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-803000 "sudo systemctl is-active --quiet service kubelet": exit status 89 (41.839666ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p NoKubernetes-803000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStartStop/group/old-k8s-version/serial/Stop (0.06s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-469000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (0.06s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.09s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-469000 -n old-k8s-version-469000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-469000 -n old-k8s-version-469000: exit status 7 (26.681459ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-469000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/no-preload/serial/Stop (0.06s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-905000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/no-preload/serial/Stop (0.06s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.09s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-905000 -n no-preload-905000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-905000 -n no-preload-905000: exit status 7 (27.344166ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-905000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/embed-certs/serial/Stop (0.06s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-470000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/embed-certs/serial/Stop (0.06s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.09s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-470000 -n embed-certs-470000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-470000 -n embed-certs-470000: exit status 7 (28.515ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-470000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (0.06s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-708000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.09s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-708000 -n default-k8s-diff-port-708000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-708000 -n default-k8s-diff-port-708000: exit status 7 (27.85825ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-708000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-644000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (0.06s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-644000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/newest-cni/serial/Stop (0.06s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.09s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-644000 -n newest-cni-644000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-644000 -n newest-cni-644000: exit status 7 (28.591667ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-644000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (24/250)

TestDownloadOnly/v1.16.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.27.4/cached-images (0s)
=== RUN   TestDownloadOnly/v1.27.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.27.4/cached-images (0.00s)

TestDownloadOnly/v1.27.4/binaries (0s)
=== RUN   TestDownloadOnly/v1.27.4/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.27.4/binaries (0.00s)

TestDownloadOnly/v1.28.0-rc.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.0-rc.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0-rc.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0-rc.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.0-rc.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0-rc.0/binaries (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:210: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/any-port (12.84s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-901000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3528485627/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1691604853839399000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3528485627/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1691604853839399000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3528485627/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1691604853839399000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3528485627/001/test-1691604853839399000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-901000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (57.509041ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-901000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (79.95375ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-901000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (104.095542ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-901000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (90.423875ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-901000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (70.310458ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-901000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (62.647375ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-901000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (104.379125ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-901000 ssh "sudo umount -f /mount-9p": exit status 1 (63.534ms)
-- stdout --
	umount: /mount-9p: no mount point specified.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-901000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-901000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3528485627/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (12.84s)
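The SKIP above is the end of a polling loop: the harness reruns the `findmnt` probe over ssh several times and only gives up once the mount never appears. A minimal sketch of that retry pattern, with a marker file standing in for the mount point (the `retry_until` helper name and the file path are illustrative, not taken from the test code):

```shell
# Retry a probe command until it succeeds or attempts run out,
# mirroring how the mount test repeatedly polls "findmnt -T /mount-9p".
retry_until() {
  attempts=$1; shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@"; then
      return 0
    fi
    i=$((i + 1))
  done
  return 1
}

# Stand-in probe: a marker file plays the role of the 9p mount point.
touch /tmp/mount-appeared
if retry_until 7 test -e /tmp/mount-appeared; then
  echo "mount appeared"
else
  echo "mount did not appear; skipping"
fi
```

In the real harness the probe is the ssh `findmnt` invocation, and the skip message in the log is emitted when the loop exhausts its attempts, as happens here because macOS blocks the unsigned binary from listening on a non-localhost port.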

TestFunctional/parallel/MountCmd/VerifyCleanup (11.73s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-901000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1663234145/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-901000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1663234145/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-901000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1663234145/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-901000 ssh "findmnt -T" /mount1: exit status 1 (77.514291ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-901000 ssh "findmnt -T" /mount2: exit status 1 (58.651417ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-901000 ssh "findmnt -T" /mount2: exit status 1 (57.481584ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-901000 ssh "findmnt -T" /mount2: exit status 1 (56.531709ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-901000 ssh "findmnt -T" /mount2: exit status 1 (57.413167ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-901000 ssh "findmnt -T" /mount2: exit status 1 (58.465583ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
2023/08/09 11:14:35 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-901000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-901000 ssh "findmnt -T" /mount2: exit status 1 (58.518875ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-901000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1663234145/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-901000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1663234145/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-901000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1663234145/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (11.73s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:296: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestNetworkPlugins/group/cilium (2.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-769000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-769000

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-769000

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-769000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-769000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-769000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-769000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-769000

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-769000

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-769000

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-769000

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-769000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-769000"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-769000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-769000"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-769000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-769000"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-769000

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-769000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-769000"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-769000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-769000"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-769000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-769000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-769000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-769000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-769000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-769000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-769000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-769000" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-769000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-769000"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-769000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-769000"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-769000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-769000"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-769000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-769000"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-769000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-769000"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-769000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-769000

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-769000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-769000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-769000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-769000

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-769000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-769000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-769000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-769000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-769000" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-769000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-769000"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-769000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-769000"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-769000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-769000"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-769000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-769000"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-769000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-769000"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-769000

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-769000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-769000"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-769000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-769000"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-769000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-769000"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-769000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-769000"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-769000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-769000"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-769000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-769000"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-769000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-769000"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-769000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-769000"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-769000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-769000"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-769000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-769000"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-769000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-769000"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-769000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-769000"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-769000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-769000"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-769000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-769000"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-769000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-769000"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-769000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-769000"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-769000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-769000"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-769000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-769000"

                                                
                                                
----------------------- debugLogs end: cilium-769000 [took: 2.087084708s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-769000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-769000
--- SKIP: TestNetworkPlugins/group/cilium (2.32s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-872000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-872000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.23s)

                                                
                                    