Test Report: QEMU_macOS 17240

Commit: ca8bf15b503bfa796ca02bce755f3a2820b75eb7 (2023-09-19) :31081

Failed tests (87/244)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 20.9
7 TestDownloadOnly/v1.16.0/kubectl 0
20 TestOffline 10.2
22 TestAddons/Setup 45.48
23 TestCertOptions 10.06
24 TestCertExpiration 195.46
25 TestDockerFlags 9.98
26 TestForceSystemdFlag 11.52
27 TestForceSystemdEnv 9.86
72 TestFunctional/parallel/ServiceCmdConnect 38.16
110 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.17
139 TestImageBuild/serial/BuildWithBuildArg 1.04
148 TestIngressAddonLegacy/serial/ValidateIngressAddons 48.01
183 TestMountStart/serial/StartWithMountFirst 10.22
186 TestMultiNode/serial/FreshStart2Nodes 9.88
187 TestMultiNode/serial/DeployApp2Nodes 109.26
188 TestMultiNode/serial/PingHostFrom2Pods 0.08
189 TestMultiNode/serial/AddNode 0.07
190 TestMultiNode/serial/ProfileList 0.1
191 TestMultiNode/serial/CopyFile 0.06
192 TestMultiNode/serial/StopNode 0.13
193 TestMultiNode/serial/StartAfterStop 0.1
194 TestMultiNode/serial/RestartKeepsNodes 5.35
195 TestMultiNode/serial/DeleteNode 0.09
196 TestMultiNode/serial/StopMultiNode 0.14
197 TestMultiNode/serial/RestartMultiNode 5.25
198 TestMultiNode/serial/ValidateNameConflict 20.23
202 TestPreload 9.95
204 TestScheduledStopUnix 10.04
205 TestSkaffold 11.92
208 TestRunningBinaryUpgrade 127
210 TestKubernetesUpgrade 15.31
223 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.49
224 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.05
225 TestStoppedBinaryUpgrade/Setup 145.81
227 TestPause/serial/Start 9.8
237 TestNoKubernetes/serial/StartWithK8s 9.78
238 TestNoKubernetes/serial/StartWithStopK8s 5.32
239 TestNoKubernetes/serial/Start 5.31
243 TestNoKubernetes/serial/StartNoArgs 5.29
245 TestNetworkPlugins/group/kindnet/Start 9.78
246 TestNetworkPlugins/group/auto/Start 9.89
247 TestNetworkPlugins/group/flannel/Start 9.71
248 TestNetworkPlugins/group/enable-default-cni/Start 9.79
249 TestNetworkPlugins/group/bridge/Start 9.76
250 TestNetworkPlugins/group/kubenet/Start 9.86
251 TestNetworkPlugins/group/custom-flannel/Start 9.71
252 TestStoppedBinaryUpgrade/Upgrade 2.2
253 TestStoppedBinaryUpgrade/MinikubeLogs 0.12
254 TestNetworkPlugins/group/calico/Start 9.75
255 TestNetworkPlugins/group/false/Start 10.3
257 TestStartStop/group/old-k8s-version/serial/FirstStart 9.98
259 TestStartStop/group/no-preload/serial/FirstStart 10.27
260 TestStartStop/group/old-k8s-version/serial/DeployApp 0.1
261 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
264 TestStartStop/group/old-k8s-version/serial/SecondStart 6.92
265 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
266 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
267 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
268 TestStartStop/group/old-k8s-version/serial/Pause 0.1
270 TestStartStop/group/embed-certs/serial/FirstStart 11.46
271 TestStartStop/group/no-preload/serial/DeployApp 0.09
272 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
275 TestStartStop/group/no-preload/serial/SecondStart 7.07
276 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
277 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
278 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
279 TestStartStop/group/no-preload/serial/Pause 0.1
281 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 11.08
282 TestStartStop/group/embed-certs/serial/DeployApp 0.1
283 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
286 TestStartStop/group/embed-certs/serial/SecondStart 6.96
287 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.04
288 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
289 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
290 TestStartStop/group/embed-certs/serial/Pause 0.1
292 TestStartStop/group/newest-cni/serial/FirstStart 11.39
293 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
294 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
297 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 7.03
298 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
299 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
300 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
301 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
306 TestStartStop/group/newest-cni/serial/SecondStart 5.25
309 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
310 TestStartStop/group/newest-cni/serial/Pause 0.09
TestDownloadOnly/v1.16.0/json-events (20.9s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-618000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-618000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=qemu2 : exit status 40 (20.900150167s)

-- stdout --
	{"specversion":"1.0","id":"bcdc830c-b44a-4a08-b0a7-9411172bf48b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-618000] minikube v1.31.2 on Darwin 13.5.2 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"76ea073e-9170-4e0f-bf63-25c819fc4354","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17240"}}
	{"specversion":"1.0","id":"b127471b-ddb5-43ae-bd9c-f8257c7d0c25","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig"}}
	{"specversion":"1.0","id":"f0d0adaf-7711-4cf7-9180-2337485627d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"4ad9068d-4712-4aad-ac44-3ca020e0c2f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"48b3ea10-9797-499c-8df7-91549b2414c0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube"}}
	{"specversion":"1.0","id":"3f435fa4-74f0-4f13-8545-f6ae9ec4588d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"b6307126-9979-4680-b313-fd72c1b26504","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"c3b448c0-8abf-456c-9560-f8f0ef74712c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"74cc736d-3d9b-4dac-852e-a2d228b6fc3b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"fb951cca-e30f-48f7-9cfd-3a4578ee4c66","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node download-only-618000 in cluster download-only-618000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"c5bb95b6-f173-4e73-9b20-2db37f13ce5c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.16.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"b8eec78f-0f98-4685-acc4-124cfefea424","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/17240-943/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x107d257e0 0x107d257e0 0x107d257e0 0x107d257e0 0x107d257e0 0x107d257e0 0x107d257e0] Decompressors:map[bz2:0x140005ae380 gz:0x140005ae388 tar:0x140005ae300 tar.bz2:0x140005ae340 tar.gz:0x140005ae350 tar.xz:0x140005ae360 tar.zst:0x140005ae370 tbz2:0x140005ae340 tgz:0x140005a
e350 txz:0x140005ae360 tzst:0x140005ae370 xz:0x140005ae390 zip:0x140005ae3a0 zst:0x140005ae398] Getters:map[file:0x140009d0b70 http:0x140000aa8c0 https:0x140000aa910] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"e85cff77-719a-4a6a-8635-77b7353a5577","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0919 09:33:37.634767    2053 out.go:296] Setting OutFile to fd 1 ...
	I0919 09:33:37.634906    2053 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:33:37.634909    2053 out.go:309] Setting ErrFile to fd 2...
	I0919 09:33:37.634912    2053 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:33:37.635036    2053 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17240-943/.minikube/bin
	W0919 09:33:37.635132    2053 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17240-943/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17240-943/.minikube/config/config.json: no such file or directory
	I0919 09:33:37.636303    2053 out.go:303] Setting JSON to true
	I0919 09:33:37.653131    2053 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":191,"bootTime":1695141026,"procs":390,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0919 09:33:37.653201    2053 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0919 09:33:37.660329    2053 out.go:97] [download-only-618000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0919 09:33:37.663223    2053 out.go:169] MINIKUBE_LOCATION=17240
	I0919 09:33:37.660501    2053 notify.go:220] Checking for updates...
	W0919 09:33:37.660523    2053 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball: no such file or directory
	I0919 09:33:37.670239    2053 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	I0919 09:33:37.673276    2053 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 09:33:37.676267    2053 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 09:33:37.679295    2053 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	W0919 09:33:37.685229    2053 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0919 09:33:37.685413    2053 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 09:33:37.690292    2053 out.go:97] Using the qemu2 driver based on user configuration
	I0919 09:33:37.690299    2053 start.go:298] selected driver: qemu2
	I0919 09:33:37.690313    2053 start.go:902] validating driver "qemu2" against <nil>
	I0919 09:33:37.690375    2053 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0919 09:33:37.692371    2053 out.go:169] Automatically selected the socket_vmnet network
	I0919 09:33:37.698462    2053 start_flags.go:384] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0919 09:33:37.698556    2053 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0919 09:33:37.698619    2053 cni.go:84] Creating CNI manager for ""
	I0919 09:33:37.698635    2053 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0919 09:33:37.698640    2053 start_flags.go:321] config:
	{Name:download-only-618000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-618000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRun
time:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 09:33:37.704146    2053 iso.go:125] acquiring lock: {Name:mka8e023e06bcb4803b8b8b48f1b2d6ef6b15681 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 09:33:37.708285    2053 out.go:97] Downloading VM boot image ...
	I0919 09:33:37.708314    2053 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/17240-943/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso
	I0919 09:33:46.142021    2053 out.go:97] Starting control plane node download-only-618000 in cluster download-only-618000
	I0919 09:33:46.142046    2053 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0919 09:33:46.194539    2053 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0919 09:33:46.194548    2053 cache.go:57] Caching tarball of preloaded images
	I0919 09:33:46.194740    2053 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0919 09:33:46.200366    2053 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0919 09:33:46.200372    2053 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0919 09:33:46.282691    2053 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0919 09:33:56.521599    2053 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0919 09:33:56.521727    2053 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0919 09:33:57.160573    2053 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0919 09:33:57.160762    2053 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/download-only-618000/config.json ...
	I0919 09:33:57.160781    2053 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/download-only-618000/config.json: {Name:mk6e0f8ffa2114774311c1ac6767974f1c2debb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 09:33:57.160989    2053 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0919 09:33:57.161153    2053 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 -> /Users/jenkins/minikube-integration/17240-943/.minikube/cache/darwin/arm64/v1.16.0/kubectl
	I0919 09:33:58.469834    2053 out.go:169] 
	W0919 09:33:58.474991    2053 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/17240-943/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x107d257e0 0x107d257e0 0x107d257e0 0x107d257e0 0x107d257e0 0x107d257e0 0x107d257e0] Decompressors:map[bz2:0x140005ae380 gz:0x140005ae388 tar:0x140005ae300 tar.bz2:0x140005ae340 tar.gz:0x140005ae350 tar.xz:0x140005ae360 tar.zst:0x140005ae370 tbz2:0x140005ae340 tgz:0x140005ae350 txz:0x140005ae360 tzst:0x140005ae370 xz:0x140005ae390 zip:0x140005ae3a0 zst:0x140005ae398] Getters:map[file:0x140009d0b70 http:0x140000aa8c0 https:0x140000aa910] Dir:false ProgressListener:
<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0919 09:33:58.475021    2053 out_reason.go:110] 
	W0919 09:33:58.482970    2053 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 09:33:58.486931    2053 out.go:169] 

** /stderr **
aaa_download_only_test.go:71: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-618000" "--force" "--alsologtostderr" "--kubernetes-version=v1.16.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.16.0/json-events (20.90s)
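The exit status 40 above comes from the kubectl caching step: dl.k8s.io publishes no darwin/arm64 kubectl binary for v1.16.0 (that release predates Apple Silicon), so the `.sha1` checksum request gets the 404 shown in the log. A minimal sketch of how the download/checksum URL pair is composed; the URL layout and query format are copied verbatim from the log above, while the helper name is ours:

```python
def kubectl_download_url(version: str, goos: str, goarch: str) -> str:
    """Build the dl.k8s.io URL pair requested in the log: the binary URL
    carrying a ?checksum= query that points at the sibling .sha1 file."""
    base = f"https://dl.k8s.io/release/{version}/bin/{goos}/{goarch}/kubectl"
    return f"{base}?checksum=file:{base}.sha1"

# The pair that 404s in this run: v1.16.0 never shipped a darwin/arm64 kubectl.
url = kubectl_download_url("v1.16.0", "darwin", "arm64")
```

Requesting the same version for darwin/amd64 would resolve, which is why this failure only shows up on the arm64 agents.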

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:160: expected the file for binary exist at "/Users/jenkins/minikube-integration/17240-943/.minikube/cache/darwin/arm64/v1.16.0/kubectl" but got error stat /Users/jenkins/minikube-integration/17240-943/.minikube/cache/darwin/arm64/v1.16.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestOffline (10.2s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-486000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-486000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (10.073539916s)

-- stdout --
	* [offline-docker-486000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17240
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node offline-docker-486000 in cluster offline-docker-486000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-486000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
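Both VM creation attempts above die with `Failed to connect to "/var/run/socket_vmnet": Connection refused`, which usually means the socket_vmnet daemon is not listening on the agent (the same root cause behind most of the ~10s qemu2 failures in the table). A hedged pre-flight sketch that checks whether the socket file even exists and is a unix-domain socket; the path is the one from the config dump above, and the helper name is ours:

```python
import os
import stat

def socket_vmnet_ready(path: str = "/var/run/socket_vmnet") -> bool:
    """Return True only if `path` exists and is a unix-domain socket.
    A missing or non-socket path guarantees the connect attempts fail;
    note the converse does not hold: the socket can exist with no
    daemon behind it, which also yields 'Connection refused'."""
    try:
        return stat.S_ISSOCK(os.stat(path).st_mode)
    except FileNotFoundError:
        return False
```

On a healthy agent this returns True before any `minikube start --driver=qemu2` run; when it returns False, restarting the socket_vmnet service is the first thing to try.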
** stderr ** 
	I0919 09:46:49.512007    3615 out.go:296] Setting OutFile to fd 1 ...
	I0919 09:46:49.512154    3615 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:46:49.512158    3615 out.go:309] Setting ErrFile to fd 2...
	I0919 09:46:49.512161    3615 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:46:49.512295    3615 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17240-943/.minikube/bin
	I0919 09:46:49.513448    3615 out.go:303] Setting JSON to false
	I0919 09:46:49.530376    3615 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":983,"bootTime":1695141026,"procs":400,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0919 09:46:49.530473    3615 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0919 09:46:49.535612    3615 out.go:177] * [offline-docker-486000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0919 09:46:49.543513    3615 out.go:177]   - MINIKUBE_LOCATION=17240
	I0919 09:46:49.547521    3615 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	I0919 09:46:49.543523    3615 notify.go:220] Checking for updates...
	I0919 09:46:49.553470    3615 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 09:46:49.556533    3615 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 09:46:49.559550    3615 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	I0919 09:46:49.562432    3615 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 09:46:49.565889    3615 config.go:182] Loaded profile config "multinode-120000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 09:46:49.565951    3615 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 09:46:49.569479    3615 out.go:177] * Using the qemu2 driver based on user configuration
	I0919 09:46:49.576485    3615 start.go:298] selected driver: qemu2
	I0919 09:46:49.576497    3615 start.go:902] validating driver "qemu2" against <nil>
	I0919 09:46:49.576504    3615 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 09:46:49.578390    3615 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0919 09:46:49.581635    3615 out.go:177] * Automatically selected the socket_vmnet network
	I0919 09:46:49.583202    3615 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 09:46:49.583221    3615 cni.go:84] Creating CNI manager for ""
	I0919 09:46:49.583227    3615 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 09:46:49.583231    3615 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0919 09:46:49.583235    3615 start_flags.go:321] config:
	{Name:offline-docker-486000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:offline-docker-486000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthS
ock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 09:46:49.587533    3615 iso.go:125] acquiring lock: {Name:mka8e023e06bcb4803b8b8b48f1b2d6ef6b15681 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 09:46:49.590553    3615 out.go:177] * Starting control plane node offline-docker-486000 in cluster offline-docker-486000
	I0919 09:46:49.598536    3615 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0919 09:46:49.598561    3615 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0919 09:46:49.598570    3615 cache.go:57] Caching tarball of preloaded images
	I0919 09:46:49.598654    3615 preload.go:174] Found /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0919 09:46:49.598661    3615 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0919 09:46:49.598721    3615 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/offline-docker-486000/config.json ...
	I0919 09:46:49.598732    3615 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/offline-docker-486000/config.json: {Name:mkc8618ababa198e7b9481aacd4956d14b2c48d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 09:46:49.598936    3615 start.go:365] acquiring machines lock for offline-docker-486000: {Name:mk731f293d5b39390bcf4e15f4078ebe7c03576e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 09:46:49.598963    3615 start.go:369] acquired machines lock for "offline-docker-486000" in 21.625µs
	I0919 09:46:49.598973    3615 start.go:93] Provisioning new machine with config: &{Name:offline-docker-486000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:offline-docker-486000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 09:46:49.599005    3615 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 09:46:49.603546    3615 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0919 09:46:49.617752    3615 start.go:159] libmachine.API.Create for "offline-docker-486000" (driver="qemu2")
	I0919 09:46:49.617776    3615 client.go:168] LocalClient.Create starting
	I0919 09:46:49.617850    3615 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca.pem
	I0919 09:46:49.617879    3615 main.go:141] libmachine: Decoding PEM data...
	I0919 09:46:49.617891    3615 main.go:141] libmachine: Parsing certificate...
	I0919 09:46:49.617939    3615 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/cert.pem
	I0919 09:46:49.617957    3615 main.go:141] libmachine: Decoding PEM data...
	I0919 09:46:49.617964    3615 main.go:141] libmachine: Parsing certificate...
	I0919 09:46:49.618294    3615 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17240-943/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17240-943/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0919 09:46:49.738686    3615 main.go:141] libmachine: Creating SSH key...
	I0919 09:46:49.806524    3615 main.go:141] libmachine: Creating Disk image...
	I0919 09:46:49.806550    3615 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 09:46:49.806742    3615 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17240-943/.minikube/machines/offline-docker-486000/disk.qcow2.raw /Users/jenkins/minikube-integration/17240-943/.minikube/machines/offline-docker-486000/disk.qcow2
	I0919 09:46:49.821752    3615 main.go:141] libmachine: STDOUT: 
	I0919 09:46:49.821769    3615 main.go:141] libmachine: STDERR: 
	I0919 09:46:49.821821    3615 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/offline-docker-486000/disk.qcow2 +20000M
	I0919 09:46:49.829470    3615 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 09:46:49.829494    3615 main.go:141] libmachine: STDERR: 
	I0919 09:46:49.829514    3615 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17240-943/.minikube/machines/offline-docker-486000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17240-943/.minikube/machines/offline-docker-486000/disk.qcow2
	I0919 09:46:49.829520    3615 main.go:141] libmachine: Starting QEMU VM...
	I0919 09:46:49.829551    3615 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17240-943/.minikube/machines/offline-docker-486000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/offline-docker-486000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17240-943/.minikube/machines/offline-docker-486000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:af:e7:27:19:a5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/offline-docker-486000/disk.qcow2
	I0919 09:46:49.831312    3615 main.go:141] libmachine: STDOUT: 
	I0919 09:46:49.831326    3615 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 09:46:49.831344    3615 client.go:171] LocalClient.Create took 213.564667ms
	I0919 09:46:51.833403    3615 start.go:128] duration metric: createHost completed in 2.234423917s
	I0919 09:46:51.833448    3615 start.go:83] releasing machines lock for "offline-docker-486000", held for 2.234510916s
	W0919 09:46:51.833474    3615 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:46:51.844239    3615 out.go:177] * Deleting "offline-docker-486000" in qemu2 ...
	W0919 09:46:51.856385    3615 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:46:51.856395    3615 start.go:703] Will try again in 5 seconds ...
	I0919 09:46:56.858483    3615 start.go:365] acquiring machines lock for offline-docker-486000: {Name:mk731f293d5b39390bcf4e15f4078ebe7c03576e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 09:46:56.858955    3615 start.go:369] acquired machines lock for "offline-docker-486000" in 357.334µs
	I0919 09:46:56.859091    3615 start.go:93] Provisioning new machine with config: &{Name:offline-docker-486000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:offline-docker-486000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 09:46:56.859301    3615 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 09:46:56.863986    3615 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0919 09:46:56.909093    3615 start.go:159] libmachine.API.Create for "offline-docker-486000" (driver="qemu2")
	I0919 09:46:56.909139    3615 client.go:168] LocalClient.Create starting
	I0919 09:46:56.909266    3615 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca.pem
	I0919 09:46:56.909321    3615 main.go:141] libmachine: Decoding PEM data...
	I0919 09:46:56.909338    3615 main.go:141] libmachine: Parsing certificate...
	I0919 09:46:56.909403    3615 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/cert.pem
	I0919 09:46:56.909438    3615 main.go:141] libmachine: Decoding PEM data...
	I0919 09:46:56.909453    3615 main.go:141] libmachine: Parsing certificate...
	I0919 09:46:56.909969    3615 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17240-943/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17240-943/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0919 09:46:57.034736    3615 main.go:141] libmachine: Creating SSH key...
	I0919 09:46:57.505456    3615 main.go:141] libmachine: Creating Disk image...
	I0919 09:46:57.505467    3615 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 09:46:57.505638    3615 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17240-943/.minikube/machines/offline-docker-486000/disk.qcow2.raw /Users/jenkins/minikube-integration/17240-943/.minikube/machines/offline-docker-486000/disk.qcow2
	I0919 09:46:57.514671    3615 main.go:141] libmachine: STDOUT: 
	I0919 09:46:57.514685    3615 main.go:141] libmachine: STDERR: 
	I0919 09:46:57.514733    3615 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/offline-docker-486000/disk.qcow2 +20000M
	I0919 09:46:57.521975    3615 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 09:46:57.521988    3615 main.go:141] libmachine: STDERR: 
	I0919 09:46:57.522002    3615 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17240-943/.minikube/machines/offline-docker-486000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17240-943/.minikube/machines/offline-docker-486000/disk.qcow2
	I0919 09:46:57.522009    3615 main.go:141] libmachine: Starting QEMU VM...
	I0919 09:46:57.522057    3615 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17240-943/.minikube/machines/offline-docker-486000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/offline-docker-486000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17240-943/.minikube/machines/offline-docker-486000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:c9:7c:88:a6:c5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/offline-docker-486000/disk.qcow2
	I0919 09:46:57.523613    3615 main.go:141] libmachine: STDOUT: 
	I0919 09:46:57.523626    3615 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 09:46:57.523639    3615 client.go:171] LocalClient.Create took 614.504708ms
	I0919 09:46:59.525661    3615 start.go:128] duration metric: createHost completed in 2.666375792s
	I0919 09:46:59.525676    3615 start.go:83] releasing machines lock for "offline-docker-486000", held for 2.666745084s
	W0919 09:46:59.525754    3615 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-486000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-486000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:46:59.535102    3615 out.go:177] 
	W0919 09:46:59.539053    3615 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 09:46:59.539062    3615 out.go:239] * 
	* 
	W0919 09:46:59.539500    3615 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 09:46:59.550092    3615 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-486000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:523: *** TestOffline FAILED at 2023-09-19 09:46:59.560104 -0700 PDT m=+802.015736460
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-486000 -n offline-docker-486000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-486000 -n offline-docker-486000: exit status 7 (31.205708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-486000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-486000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-486000
--- FAIL: TestOffline (10.20s)

TestAddons/Setup (45.48s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-305000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:88: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-305000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 80 (45.472058417s)

-- stdout --
	* [addons-305000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17240
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node addons-305000 in cluster addons-305000
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.28.2 on Docker 24.0.6 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	
	  - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.10
	  - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	  - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	* Verifying ingress addon...
	* Verifying registry addon...
	* Verifying csi-hostpath-driver addon...
	  - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.20.0
	

-- /stdout --
** stderr ** 
	I0919 09:34:12.976294    2119 out.go:296] Setting OutFile to fd 1 ...
	I0919 09:34:12.976419    2119 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:34:12.976424    2119 out.go:309] Setting ErrFile to fd 2...
	I0919 09:34:12.976427    2119 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:34:12.976543    2119 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17240-943/.minikube/bin
	I0919 09:34:12.977625    2119 out.go:303] Setting JSON to false
	I0919 09:34:12.992991    2119 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":226,"bootTime":1695141026,"procs":386,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0919 09:34:12.993055    2119 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0919 09:34:12.997658    2119 out.go:177] * [addons-305000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0919 09:34:13.004568    2119 out.go:177]   - MINIKUBE_LOCATION=17240
	I0919 09:34:13.004631    2119 notify.go:220] Checking for updates...
	I0919 09:34:13.010541    2119 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	I0919 09:34:13.013579    2119 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 09:34:13.015010    2119 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 09:34:13.018510    2119 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	I0919 09:34:13.021573    2119 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 09:34:13.024737    2119 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 09:34:13.028582    2119 out.go:177] * Using the qemu2 driver based on user configuration
	I0919 09:34:13.035611    2119 start.go:298] selected driver: qemu2
	I0919 09:34:13.035620    2119 start.go:902] validating driver "qemu2" against <nil>
	I0919 09:34:13.035628    2119 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 09:34:13.037664    2119 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0919 09:34:13.041538    2119 out.go:177] * Automatically selected the socket_vmnet network
	I0919 09:34:13.044604    2119 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 09:34:13.044622    2119 cni.go:84] Creating CNI manager for ""
	I0919 09:34:13.044629    2119 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 09:34:13.044633    2119 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0919 09:34:13.044643    2119 start_flags.go:321] config:
	{Name:addons-305000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-305000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 09:34:13.048846    2119 iso.go:125] acquiring lock: {Name:mka8e023e06bcb4803b8b8b48f1b2d6ef6b15681 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 09:34:13.055500    2119 out.go:177] * Starting control plane node addons-305000 in cluster addons-305000
	I0919 09:34:13.059546    2119 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0919 09:34:13.059576    2119 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0919 09:34:13.059586    2119 cache.go:57] Caching tarball of preloaded images
	I0919 09:34:13.059638    2119 preload.go:174] Found /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0919 09:34:13.059643    2119 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0919 09:34:13.059815    2119 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/addons-305000/config.json ...
	I0919 09:34:13.059828    2119 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/addons-305000/config.json: {Name:mkd7898709e8e2f1fd705b0a939dd4fecfef841c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 09:34:13.060034    2119 start.go:365] acquiring machines lock for addons-305000: {Name:mk731f293d5b39390bcf4e15f4078ebe7c03576e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 09:34:13.060100    2119 start.go:369] acquired machines lock for "addons-305000" in 60.291µs
	I0919 09:34:13.060111    2119 start.go:93] Provisioning new machine with config: &{Name:addons-305000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-305000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 09:34:13.060146    2119 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 09:34:13.068553    2119 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0919 09:34:13.331452    2119 start.go:159] libmachine.API.Create for "addons-305000" (driver="qemu2")
	I0919 09:34:13.331495    2119 client.go:168] LocalClient.Create starting
	I0919 09:34:13.331642    2119 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca.pem
	I0919 09:34:13.401197    2119 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/17240-943/.minikube/certs/cert.pem
	I0919 09:34:13.440482    2119 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17240-943/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17240-943/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0919 09:34:13.984648    2119 main.go:141] libmachine: Creating SSH key...
	I0919 09:34:14.208696    2119 main.go:141] libmachine: Creating Disk image...
	I0919 09:34:14.208721    2119 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 09:34:14.208971    2119 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17240-943/.minikube/machines/addons-305000/disk.qcow2.raw /Users/jenkins/minikube-integration/17240-943/.minikube/machines/addons-305000/disk.qcow2
	I0919 09:34:14.244189    2119 main.go:141] libmachine: STDOUT: 
	I0919 09:34:14.244220    2119 main.go:141] libmachine: STDERR: 
	I0919 09:34:14.244290    2119 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/addons-305000/disk.qcow2 +20000M
	I0919 09:34:14.251780    2119 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 09:34:14.251806    2119 main.go:141] libmachine: STDERR: 
	I0919 09:34:14.251829    2119 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17240-943/.minikube/machines/addons-305000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17240-943/.minikube/machines/addons-305000/disk.qcow2
	I0919 09:34:14.251838    2119 main.go:141] libmachine: Starting QEMU VM...
	I0919 09:34:14.251874    2119 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17240-943/.minikube/machines/addons-305000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/addons-305000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17240-943/.minikube/machines/addons-305000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:fc:6b:9f:ce:3d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/addons-305000/disk.qcow2
	I0919 09:34:14.320704    2119 main.go:141] libmachine: STDOUT: 
	I0919 09:34:14.320742    2119 main.go:141] libmachine: STDERR: 
	I0919 09:34:14.320746    2119 main.go:141] libmachine: Attempt 0
	I0919 09:34:14.320799    2119 main.go:141] libmachine: Searching for 4e:fc:6b:9f:ce:3d in /var/db/dhcpd_leases ...
	I0919 09:34:16.321975    2119 main.go:141] libmachine: Attempt 1
	I0919 09:34:16.322088    2119 main.go:141] libmachine: Searching for 4e:fc:6b:9f:ce:3d in /var/db/dhcpd_leases ...
	I0919 09:34:18.323437    2119 main.go:141] libmachine: Attempt 2
	I0919 09:34:18.323499    2119 main.go:141] libmachine: Searching for 4e:fc:6b:9f:ce:3d in /var/db/dhcpd_leases ...
	I0919 09:34:20.324580    2119 main.go:141] libmachine: Attempt 3
	I0919 09:34:20.324605    2119 main.go:141] libmachine: Searching for 4e:fc:6b:9f:ce:3d in /var/db/dhcpd_leases ...
	I0919 09:34:22.325683    2119 main.go:141] libmachine: Attempt 4
	I0919 09:34:22.325699    2119 main.go:141] libmachine: Searching for 4e:fc:6b:9f:ce:3d in /var/db/dhcpd_leases ...
	I0919 09:34:24.326749    2119 main.go:141] libmachine: Attempt 5
	I0919 09:34:24.326769    2119 main.go:141] libmachine: Searching for 4e:fc:6b:9f:ce:3d in /var/db/dhcpd_leases ...
	I0919 09:34:26.327864    2119 main.go:141] libmachine: Attempt 6
	I0919 09:34:26.327903    2119 main.go:141] libmachine: Searching for 4e:fc:6b:9f:ce:3d in /var/db/dhcpd_leases ...
	I0919 09:34:26.327992    2119 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0919 09:34:26.328026    2119 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:4e:fc:6b:9f:ce:3d ID:1,4e:fc:6b:9f:ce:3d Lease:0x650b1f11}
	I0919 09:34:26.328032    2119 main.go:141] libmachine: Found match: 4e:fc:6b:9f:ce:3d
	I0919 09:34:26.328041    2119 main.go:141] libmachine: IP: 192.168.105.2
	I0919 09:34:26.328047    2119 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
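The Attempt 0–6 loop above polls macOS's /var/db/dhcpd_leases every two seconds until the VM's MAC address shows up in a lease. A minimal Go sketch of the matching step (the `dhcpLease` struct and `ipForMAC` helper are illustrative, mirroring the fields printed in the log rather than the on-disk lease format):

```go
package main

import (
	"fmt"
	"strings"
)

// dhcpLease mirrors the fields minikube logs for each entry in
// /var/db/dhcpd_leases (field names here are illustrative, not the
// actual on-disk format).
type dhcpLease struct {
	Name      string
	IPAddress string
	HWAddress string
}

// ipForMAC scans parsed leases for a case-insensitive MAC match and
// returns the bound IP, mimicking the "Searching for ... Found match"
// loop in the log.
func ipForMAC(leases []dhcpLease, mac string) (string, bool) {
	for _, l := range leases {
		if strings.EqualFold(l.HWAddress, mac) {
			return l.IPAddress, true
		}
	}
	return "", false
}

func main() {
	leases := []dhcpLease{
		{Name: "minikube", IPAddress: "192.168.105.2", HWAddress: "4e:fc:6b:9f:ce:3d"},
	}
	if ip, ok := ipForMAC(leases, "4E:FC:6B:9F:CE:3D"); ok {
		fmt.Println("IP:", ip) // matches the log's "IP: 192.168.105.2"
	}
}
```

When no lease matches yet (Attempts 0–5 above), the caller sleeps and retries; only once the DHCP server has handed out an address does the search succeed.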
	I0919 09:34:28.346680    2119 machine.go:88] provisioning docker machine ...
	I0919 09:34:28.346736    2119 buildroot.go:166] provisioning hostname "addons-305000"
	I0919 09:34:28.348108    2119 main.go:141] libmachine: Using SSH client type: native
	I0919 09:34:28.348817    2119 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102940760] 0x102942ed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0919 09:34:28.348836    2119 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-305000 && echo "addons-305000" | sudo tee /etc/hostname
	I0919 09:34:28.427698    2119 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-305000
	
	I0919 09:34:28.427841    2119 main.go:141] libmachine: Using SSH client type: native
	I0919 09:34:28.428327    2119 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102940760] 0x102942ed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0919 09:34:28.428342    2119 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-305000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-305000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-305000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 09:34:28.489666    2119 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 09:34:28.489687    2119 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17240-943/.minikube CaCertPath:/Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17240-943/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17240-943/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17240-943/.minikube}
	I0919 09:34:28.489703    2119 buildroot.go:174] setting up certificates
	I0919 09:34:28.489729    2119 provision.go:83] configureAuth start
	I0919 09:34:28.489734    2119 provision.go:138] copyHostCerts
	I0919 09:34:28.489869    2119 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17240-943/.minikube/ca.pem (1082 bytes)
	I0919 09:34:28.490188    2119 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17240-943/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17240-943/.minikube/cert.pem (1123 bytes)
	I0919 09:34:28.490339    2119 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17240-943/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17240-943/.minikube/key.pem (1679 bytes)
	I0919 09:34:28.490495    2119 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17240-943/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca-key.pem org=jenkins.addons-305000 san=[192.168.105.2 192.168.105.2 localhost 127.0.0.1 minikube addons-305000]
	I0919 09:34:28.631736    2119 provision.go:172] copyRemoteCerts
	I0919 09:34:28.631821    2119 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 09:34:28.631830    2119 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/addons-305000/id_rsa Username:docker}
	I0919 09:34:28.659139    2119 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 09:34:28.665800    2119 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17240-943/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0919 09:34:28.672964    2119 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17240-943/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 09:34:28.680079    2119 provision.go:86] duration metric: configureAuth took 190.33875ms
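The configureAuth phase above copies the host CA material and mints a server certificate whose SANs cover the VM IP, localhost, and both hostnames. A hedged sketch of the equivalent `crypto/x509` flow (`buildAndVerify` is an illustrative helper, not minikube's actual certificate code):

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// buildAndVerify mints a throwaway CA, signs a server certificate with
// the SANs the log reports for addons-305000, and verifies the chain.
// It returns the verification error (nil on success).
func buildAndVerify() error {
	// Self-signed CA, standing in for .minikube/certs/ca.pem.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return err
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now().Add(-time.Hour),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		return err
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		return err
	}

	// Server cert carrying the SANs from the log's "san=[...]" line.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return err
	}
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-305000"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(24 * time.Hour),
		IPAddresses:  []net.IP{net.ParseIP("192.168.105.2"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "addons-305000"},
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		return err
	}
	srvCert, err := x509.ParseCertificate(srvDER)
	if err != nil {
		return err
	}

	// Verify the chain the way a TLS client (e.g. dockerd --tlsverify) would.
	pool := x509.NewCertPool()
	pool.AddCert(caCert)
	_, err = srvCert.Verify(x509.VerifyOptions{Roots: pool, DNSName: "addons-305000"})
	return err
}

func main() {
	fmt.Println("chain valid:", buildAndVerify() == nil)
}
```

The resulting server.pem / server-key.pem pair is what gets scp'd to /etc/docker in the steps that follow, letting dockerd serve TLS on tcp://0.0.0.0:2376.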
	I0919 09:34:28.680088    2119 buildroot.go:189] setting minikube options for container-runtime
	I0919 09:34:28.680207    2119 config.go:182] Loaded profile config "addons-305000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 09:34:28.680240    2119 main.go:141] libmachine: Using SSH client type: native
	I0919 09:34:28.680454    2119 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102940760] 0x102942ed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0919 09:34:28.680494    2119 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 09:34:28.730113    2119 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0919 09:34:28.730122    2119 buildroot.go:70] root file system type: tmpfs
	I0919 09:34:28.730177    2119 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 09:34:28.730217    2119 main.go:141] libmachine: Using SSH client type: native
	I0919 09:34:28.730437    2119 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102940760] 0x102942ed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0919 09:34:28.730471    2119 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 09:34:28.782841    2119 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 09:34:28.782889    2119 main.go:141] libmachine: Using SSH client type: native
	I0919 09:34:28.783109    2119 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102940760] 0x102942ed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0919 09:34:28.783120    2119 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 09:34:29.110963    2119 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0919 09:34:29.110976    2119 machine.go:91] provisioned docker machine in 764.281208ms
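The `diff -u old new || { mv ...; systemctl ... }` command above is an idempotent-update idiom: the rendered unit file only replaces the existing one (and only then triggers a daemon-reload, enable, and restart) when the content actually differs. On this first boot `diff` fails because no docker.service exists yet, so the write always installs it. The decision can be sketched as:

```go
package main

import (
	"bytes"
	"fmt"
)

// needsRestart mirrors the log's "diff || { mv ...; systemctl restart }"
// idiom: the new unit file only replaces the old one (and triggers a
// daemon-reload + restart) when the content actually differs. A missing
// old file (nil, as on the first boot above) counts as a difference.
func needsRestart(oldUnit, newUnit []byte) bool {
	return oldUnit == nil || !bytes.Equal(oldUnit, newUnit)
}

func main() {
	fresh := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
	fmt.Println(needsRestart(nil, fresh))   // first boot: no docker.service yet
	fmt.Println(needsRestart(fresh, fresh)) // unchanged: skip the restart
}
```

Skipping the restart when nothing changed matters on re-provisioning runs, where an unnecessary `systemctl restart docker` would kill running containers.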
	I0919 09:34:29.110981    2119 client.go:171] LocalClient.Create took 15.779756917s
	I0919 09:34:29.110996    2119 start.go:167] duration metric: libmachine.API.Create for "addons-305000" took 15.779827125s
	I0919 09:34:29.111002    2119 start.go:300] post-start starting for "addons-305000" (driver="qemu2")
	I0919 09:34:29.111007    2119 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 09:34:29.111080    2119 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 09:34:29.111089    2119 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/addons-305000/id_rsa Username:docker}
	I0919 09:34:29.138185    2119 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 09:34:29.139401    2119 info.go:137] Remote host: Buildroot 2021.02.12
	I0919 09:34:29.139408    2119 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17240-943/.minikube/addons for local assets ...
	I0919 09:34:29.139477    2119 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17240-943/.minikube/files for local assets ...
	I0919 09:34:29.139506    2119 start.go:303] post-start completed in 28.501958ms
	I0919 09:34:29.139865    2119 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/addons-305000/config.json ...
	I0919 09:34:29.140031    2119 start.go:128] duration metric: createHost completed in 16.080161666s
	I0919 09:34:29.140067    2119 main.go:141] libmachine: Using SSH client type: native
	I0919 09:34:29.140278    2119 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102940760] 0x102942ed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0919 09:34:29.140283    2119 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0919 09:34:29.189374    2119 main.go:141] libmachine: SSH cmd err, output: <nil>: 1695141269.469384460
	
	I0919 09:34:29.189385    2119 fix.go:206] guest clock: 1695141269.469384460
	I0919 09:34:29.189390    2119 fix.go:219] Guest: 2023-09-19 09:34:29.46938446 -0700 PDT Remote: 2023-09-19 09:34:29.140033 -0700 PDT m=+16.182217251 (delta=329.35146ms)
	I0919 09:34:29.189403    2119 fix.go:190] guest clock delta is within tolerance: 329.35146ms
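The guest-clock check above parses the VM's `date +%s.%N` output and compares it against the host clock, accepting the 329ms delta. A sketch of that parse-and-compare step (`parseEpochNS` and the `tolerance` constant are illustrative, not minikube's actual code):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseEpochNS converts `date +%s.%N` output, e.g. the log's
// "1695141269.469384460", into a time.Time.
func parseEpochNS(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, _ := parseEpochNS("1695141269.469384460")
	// Host timestamp from the log: 2023-09-19 09:34:29.140033 -0700 PDT.
	host := time.Date(2023, 9, 19, 16, 34, 29, 140033000, time.UTC)
	delta := guest.Sub(host)
	const tolerance = 2 * time.Second // illustrative threshold, not minikube's actual constant
	fmt.Printf("delta=%v within=%v\n", delta.Round(time.Millisecond), delta < tolerance)
	// prints: delta=329ms within=true
}
```

A delta outside tolerance would indicate the guest clock drifted during boot and typically leads to the provisioner resyncing it before certificates (which are validity-window sensitive) are used.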
	I0919 09:34:29.189406    2119 start.go:83] releasing machines lock for "addons-305000", held for 16.12958275s
	I0919 09:34:29.189786    2119 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 09:34:29.189786    2119 ssh_runner.go:195] Run: cat /version.json
	I0919 09:34:29.189816    2119 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/addons-305000/id_rsa Username:docker}
	I0919 09:34:29.189831    2119 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/addons-305000/id_rsa Username:docker}
	I0919 09:34:29.214509    2119 ssh_runner.go:195] Run: systemctl --version
	I0919 09:34:29.216528    2119 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0919 09:34:29.261179    2119 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0919 09:34:29.261218    2119 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 09:34:29.266343    2119 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0919 09:34:29.266351    2119 start.go:469] detecting cgroup driver to use...
	I0919 09:34:29.266465    2119 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 09:34:29.272359    2119 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0919 09:34:29.275375    2119 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 09:34:29.278272    2119 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0919 09:34:29.278294    2119 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0919 09:34:29.281743    2119 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 09:34:29.285315    2119 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 09:34:29.288769    2119 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 09:34:29.291958    2119 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 09:34:29.295052    2119 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 09:34:29.297878    2119 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 09:34:29.301127    2119 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 09:34:29.304305    2119 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 09:34:29.384201    2119 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 09:34:29.393361    2119 start.go:469] detecting cgroup driver to use...
	I0919 09:34:29.393427    2119 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 09:34:29.398702    2119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 09:34:29.403770    2119 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 09:34:29.413235    2119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 09:34:29.418299    2119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 09:34:29.423011    2119 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0919 09:34:29.459855    2119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 09:34:29.464991    2119 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 09:34:29.470142    2119 ssh_runner.go:195] Run: which cri-dockerd
	I0919 09:34:29.471449    2119 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 09:34:29.473935    2119 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0919 09:34:29.478818    2119 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 09:34:29.553983    2119 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 09:34:29.634380    2119 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0919 09:34:29.634394    2119 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0919 09:34:29.639641    2119 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 09:34:29.714703    2119 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 09:34:30.870639    2119 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.155940208s)
	I0919 09:34:30.870691    2119 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 09:34:30.946120    2119 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 09:34:31.022110    2119 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 09:34:31.103170    2119 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 09:34:31.183662    2119 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 09:34:31.190348    2119 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 09:34:31.262192    2119 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0919 09:34:31.285971    2119 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 09:34:31.286072    2119 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 09:34:31.288342    2119 start.go:537] Will wait 60s for crictl version
	I0919 09:34:31.288384    2119 ssh_runner.go:195] Run: which crictl
	I0919 09:34:31.290711    2119 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 09:34:31.308909    2119 start.go:553] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I0919 09:34:31.308974    2119 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 09:34:31.318705    2119 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 09:34:31.333360    2119 out.go:204] * Preparing Kubernetes v1.28.2 on Docker 24.0.6 ...
	I0919 09:34:31.333501    2119 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0919 09:34:31.335056    2119 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 09:34:31.338510    2119 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0919 09:34:31.338553    2119 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 09:34:31.343727    2119 docker.go:636] Got preloaded images: 
	I0919 09:34:31.343735    2119 docker.go:642] registry.k8s.io/kube-apiserver:v1.28.2 wasn't preloaded
	I0919 09:34:31.343774    2119 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0919 09:34:31.346443    2119 ssh_runner.go:195] Run: which lz4
	I0919 09:34:31.347872    2119 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0919 09:34:31.349187    2119 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0919 09:34:31.349199    2119 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (356993689 bytes)
	I0919 09:34:32.654678    2119 docker.go:600] Took 1.306859 seconds to copy over tarball
	I0919 09:34:32.654735    2119 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0919 09:34:33.672906    2119 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.018174959s)
	I0919 09:34:33.672919    2119 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0919 09:34:33.688715    2119 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0919 09:34:33.691861    2119 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0919 09:34:33.696897    2119 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 09:34:33.779329    2119 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 09:34:35.887690    2119 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.108380292s)
	I0919 09:34:35.887793    2119 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 09:34:35.897889    2119 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0919 09:34:35.897902    2119 cache_images.go:84] Images are preloaded, skipping loading
	I0919 09:34:35.897948    2119 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0919 09:34:35.905639    2119 cni.go:84] Creating CNI manager for ""
	I0919 09:34:35.905649    2119 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 09:34:35.905668    2119 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0919 09:34:35.905677    2119 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-305000 NodeName:addons-305000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}

	I0919 09:34:35.905746    2119 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-305000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 09:34:35.905787    2119 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-305000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:addons-305000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0919 09:34:35.905838    2119 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I0919 09:34:35.909433    2119 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 09:34:35.909464    2119 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 09:34:35.912642    2119 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0919 09:34:35.917968    2119 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 09:34:35.923181    2119 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0919 09:34:35.928109    2119 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0919 09:34:35.929463    2119 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
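Both /etc/hosts edits in this log (for host.minikube.internal earlier and control-plane.minikube.internal here) use the same `grep -v` + `echo` pipeline to upsert a tab-separated entry. The equivalent logic in Go (`upsertHost` is a hypothetical helper, not minikube's implementation):

```go
package main

import (
	"fmt"
	"strings"
)

// upsertHost reproduces the log's grep -v + echo pipeline: drop any line
// already ending in "\t<name>", then append a fresh "ip\tname" entry.
func upsertHost(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.105.1\thost.minikube.internal\n"
	fmt.Print(upsertHost(hosts, "192.168.105.2", "control-plane.minikube.internal"))
}
```

Because stale entries for the same name are filtered out before the append, running the pipeline repeatedly (as happens across provisioning retries) never duplicates the line.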
	I0919 09:34:35.933431    2119 certs.go:56] Setting up /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/addons-305000 for IP: 192.168.105.2
	I0919 09:34:35.933451    2119 certs.go:190] acquiring lock for shared ca certs: {Name:mk8e0a0ed9a6157106206482b1c6d1a127cc10e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 09:34:35.933598    2119 certs.go:204] generating minikubeCA CA: /Users/jenkins/minikube-integration/17240-943/.minikube/ca.key
	I0919 09:34:36.067350    2119 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17240-943/.minikube/ca.crt ...
	I0919 09:34:36.067358    2119 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17240-943/.minikube/ca.crt: {Name:mk8f88e1aba2f94bf07099f5a7dfab31eb649b95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 09:34:36.067622    2119 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17240-943/.minikube/ca.key ...
	I0919 09:34:36.067626    2119 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17240-943/.minikube/ca.key: {Name:mk6f1c3dbfa93aebe65699efc03dc6af95c6ab17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 09:34:36.067740    2119 certs.go:204] generating proxyClientCA CA: /Users/jenkins/minikube-integration/17240-943/.minikube/proxy-client-ca.key
	I0919 09:34:36.222030    2119 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17240-943/.minikube/proxy-client-ca.crt ...
	I0919 09:34:36.222038    2119 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17240-943/.minikube/proxy-client-ca.crt: {Name:mk93f66c527bac7157c7ef6d7dbbe6d2b0712d20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 09:34:36.222187    2119 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17240-943/.minikube/proxy-client-ca.key ...
	I0919 09:34:36.222190    2119 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17240-943/.minikube/proxy-client-ca.key: {Name:mk418711ac093fbcc211e9f9f0e37e4fefb4ae9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 09:34:36.222329    2119 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/addons-305000/client.key
	I0919 09:34:36.222339    2119 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/addons-305000/client.crt with IP's: []
	I0919 09:34:36.325083    2119 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/addons-305000/client.crt ...
	I0919 09:34:36.325095    2119 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/addons-305000/client.crt: {Name:mk215a62bae3db4e71e2f50545c10944bb43d317 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 09:34:36.325309    2119 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/addons-305000/client.key ...
	I0919 09:34:36.325312    2119 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/addons-305000/client.key: {Name:mkf9f2f0f925a0c6dda98f13a0722392ee10f3c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 09:34:36.325407    2119 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/addons-305000/apiserver.key.96055969
	I0919 09:34:36.325417    2119 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/addons-305000/apiserver.crt.96055969 with IP's: [192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0919 09:34:36.391852    2119 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/addons-305000/apiserver.crt.96055969 ...
	I0919 09:34:36.391856    2119 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/addons-305000/apiserver.crt.96055969: {Name:mkbad3bf4f7365d8ee6432387166c815c7910755 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 09:34:36.391981    2119 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/addons-305000/apiserver.key.96055969 ...
	I0919 09:34:36.391983    2119 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/addons-305000/apiserver.key.96055969: {Name:mk33613468daf5fbf391663228bfed2706080684 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 09:34:36.392079    2119 certs.go:337] copying /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/addons-305000/apiserver.crt.96055969 -> /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/addons-305000/apiserver.crt
	I0919 09:34:36.392300    2119 certs.go:341] copying /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/addons-305000/apiserver.key.96055969 -> /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/addons-305000/apiserver.key
	I0919 09:34:36.392415    2119 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/addons-305000/proxy-client.key
	I0919 09:34:36.392426    2119 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/addons-305000/proxy-client.crt with IP's: []
	I0919 09:34:36.440494    2119 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/addons-305000/proxy-client.crt ...
	I0919 09:34:36.440498    2119 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/addons-305000/proxy-client.crt: {Name:mk68f14f884ae1316606a19a21be2f298169e400 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 09:34:36.440642    2119 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/addons-305000/proxy-client.key ...
	I0919 09:34:36.440646    2119 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/addons-305000/proxy-client.key: {Name:mk044266b8e1b0d9cf2d6bc0fb7493e4164a799c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 09:34:36.440910    2119 certs.go:437] found cert: /Users/jenkins/minikube-integration/17240-943/.minikube/certs/Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 09:34:36.440935    2119 certs.go:437] found cert: /Users/jenkins/minikube-integration/17240-943/.minikube/certs/Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca.pem (1082 bytes)
	I0919 09:34:36.440957    2119 certs.go:437] found cert: /Users/jenkins/minikube-integration/17240-943/.minikube/certs/Users/jenkins/minikube-integration/17240-943/.minikube/certs/cert.pem (1123 bytes)
	I0919 09:34:36.440980    2119 certs.go:437] found cert: /Users/jenkins/minikube-integration/17240-943/.minikube/certs/Users/jenkins/minikube-integration/17240-943/.minikube/certs/key.pem (1679 bytes)
	I0919 09:34:36.441412    2119 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/addons-305000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0919 09:34:36.449576    2119 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/addons-305000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 09:34:36.456933    2119 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/addons-305000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 09:34:36.464219    2119 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/addons-305000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 09:34:36.471296    2119 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17240-943/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 09:34:36.477888    2119 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17240-943/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0919 09:34:36.485029    2119 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17240-943/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 09:34:36.492276    2119 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17240-943/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 09:34:36.499021    2119 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17240-943/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 09:34:36.505735    2119 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 09:34:36.511713    2119 ssh_runner.go:195] Run: openssl version
	I0919 09:34:36.513722    2119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 09:34:36.517330    2119 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 09:34:36.519105    2119 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 19 16:34 /usr/share/ca-certificates/minikubeCA.pem
	I0919 09:34:36.519127    2119 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 09:34:36.520918    2119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
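The `b5213941.0` symlink name is not arbitrary: OpenSSL looks CAs up in `/etc/ssl/certs` by the subject-name hash of the certificate plus a `.0` suffix, which is exactly what the preceding `openssl x509 -hash -noout` call computes. A sketch with a throwaway self-signed cert (all paths illustrative; assumes `openssl` is on PATH):

```shell
# Create a throwaway CA cert and link it under its OpenSSL subject hash,
# mirroring the minikubeCA.pem -> <hash>.0 symlink seen in the log.
DIR=/tmp/certs-demo
mkdir -p "$DIR"
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demoCA" \
  -keyout "$DIR/demo.key" -out "$DIR/demo.crt" 2>/dev/null
HASH=$(openssl x509 -hash -noout -in "$DIR/demo.crt")   # 8 hex chars
ln -fs "$DIR/demo.crt" "$DIR/$HASH.0"
echo "$HASH"
```

TLS clients that use the system trust directory resolve an issuer by computing this same hash and opening `<hash>.0`, so the symlink is what actually makes minikubeCA trusted inside the guest.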
	I0919 09:34:36.524203    2119 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0919 09:34:36.525495    2119 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0919 09:34:36.525533    2119 kubeadm.go:404] StartCluster: {Name:addons-305000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-305000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 09:34:36.525597    2119 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0919 09:34:36.535446    2119 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 09:34:36.538393    2119 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 09:34:36.541404    2119 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 09:34:36.544488    2119 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 09:34:36.544501    2119 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0919 09:34:36.568018    2119 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I0919 09:34:36.568046    2119 kubeadm.go:322] [preflight] Running pre-flight checks
	I0919 09:34:36.624636    2119 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 09:34:36.624696    2119 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 09:34:36.624791    2119 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0919 09:34:36.731007    2119 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 09:34:36.739184    2119 out.go:204]   - Generating certificates and keys ...
	I0919 09:34:36.739215    2119 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0919 09:34:36.739251    2119 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0919 09:34:36.770949    2119 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 09:34:36.821877    2119 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0919 09:34:36.911283    2119 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0919 09:34:37.204806    2119 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0919 09:34:37.355756    2119 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0919 09:34:37.355827    2119 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-305000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0919 09:34:37.437911    2119 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0919 09:34:37.437994    2119 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-305000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0919 09:34:37.737675    2119 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 09:34:37.805887    2119 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 09:34:37.878219    2119 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0919 09:34:37.878252    2119 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 09:34:37.978137    2119 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 09:34:38.058458    2119 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 09:34:38.217061    2119 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 09:34:38.607673    2119 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 09:34:38.607869    2119 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 09:34:38.609232    2119 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 09:34:38.613468    2119 out.go:204]   - Booting up control plane ...
	I0919 09:34:38.613560    2119 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 09:34:38.613624    2119 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 09:34:38.613656    2119 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 09:34:38.617862    2119 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 09:34:38.618342    2119 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 09:34:38.618442    2119 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0919 09:34:38.702920    2119 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0919 09:34:42.204226    2119 kubeadm.go:322] [apiclient] All control plane components are healthy after 3.501188 seconds
	I0919 09:34:42.204289    2119 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 09:34:42.210115    2119 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 09:34:42.719702    2119 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 09:34:42.719796    2119 kubeadm.go:322] [mark-control-plane] Marking the node addons-305000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 09:34:43.224901    2119 kubeadm.go:322] [bootstrap-token] Using token: rs4jz8.5xtdupmg7s6ax5a8
	I0919 09:34:43.229307    2119 out.go:204]   - Configuring RBAC rules ...
	I0919 09:34:43.229370    2119 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 09:34:43.238330    2119 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 09:34:43.241354    2119 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 09:34:43.242586    2119 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 09:34:43.244129    2119 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 09:34:43.245597    2119 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 09:34:43.250152    2119 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 09:34:43.431449    2119 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0919 09:34:43.640898    2119 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0919 09:34:43.641289    2119 kubeadm.go:322] 
	I0919 09:34:43.641318    2119 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0919 09:34:43.641329    2119 kubeadm.go:322] 
	I0919 09:34:43.641367    2119 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0919 09:34:43.641370    2119 kubeadm.go:322] 
	I0919 09:34:43.641382    2119 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0919 09:34:43.641425    2119 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 09:34:43.641450    2119 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 09:34:43.641459    2119 kubeadm.go:322] 
	I0919 09:34:43.641492    2119 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0919 09:34:43.641496    2119 kubeadm.go:322] 
	I0919 09:34:43.641520    2119 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 09:34:43.641524    2119 kubeadm.go:322] 
	I0919 09:34:43.641552    2119 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0919 09:34:43.641587    2119 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 09:34:43.641623    2119 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 09:34:43.641626    2119 kubeadm.go:322] 
	I0919 09:34:43.641662    2119 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 09:34:43.641699    2119 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0919 09:34:43.641701    2119 kubeadm.go:322] 
	I0919 09:34:43.641755    2119 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token rs4jz8.5xtdupmg7s6ax5a8 \
	I0919 09:34:43.641844    2119 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ca3cab74a9dc47dde0bf47a79e9f850e6b13ad8707fb3a16c62adcc7135054bc \
	I0919 09:34:43.641857    2119 kubeadm.go:322] 	--control-plane 
	I0919 09:34:43.641860    2119 kubeadm.go:322] 
	I0919 09:34:43.641912    2119 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0919 09:34:43.641915    2119 kubeadm.go:322] 
	I0919 09:34:43.641956    2119 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token rs4jz8.5xtdupmg7s6ax5a8 \
	I0919 09:34:43.642004    2119 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ca3cab74a9dc47dde0bf47a79e9f850e6b13ad8707fb3a16c62adcc7135054bc 
	I0919 09:34:43.642116    2119 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0919 09:34:43.642124    2119 cni.go:84] Creating CNI manager for ""
	I0919 09:34:43.642134    2119 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 09:34:43.649913    2119 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0919 09:34:43.653827    2119 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0919 09:34:43.656923    2119 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0919 09:34:43.661763    2119 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 09:34:43.661807    2119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:34:43.661863    2119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=4d69d3d50d3fb420e04057e6545e9fd90e260986 minikube.k8s.io/name=addons-305000 minikube.k8s.io/updated_at=2023_09_19T09_34_43_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:34:43.717108    2119 ops.go:34] apiserver oom_adj: -16
	I0919 09:34:43.717132    2119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:34:43.758847    2119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:34:44.293443    2119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:34:44.793435    2119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:34:45.293446    2119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:34:45.793520    2119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:34:46.293429    2119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:34:46.793474    2119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:34:47.293437    2119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:34:47.793437    2119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:34:48.293383    2119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:34:48.793454    2119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:34:49.293400    2119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:34:49.792696    2119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:34:50.293381    2119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:34:50.793363    2119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:34:51.293379    2119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:34:51.793383    2119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:34:52.293343    2119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:34:52.793382    2119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:34:53.293342    2119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:34:53.793331    2119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:34:54.293292    2119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:34:54.793318    2119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:34:55.293327    2119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:34:55.793270    2119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:34:56.293247    2119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:34:56.793225    2119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:34:57.293237    2119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:34:57.793213    2119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:34:58.293230    2119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:34:58.331319    2119 kubeadm.go:1081] duration metric: took 14.669797334s to wait for elevateKubeSystemPrivileges.
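The burst of identical `kubectl get sa default` runs above is a fixed-interval poll: the command is retried roughly every 500ms until the `default` ServiceAccount exists, and the elapsed time is then recorded as the `elevateKubeSystemPrivileges` duration. The shape of that loop, with a file standing in for the ServiceAccount appearing (names here are illustrative):

```shell
# Poll until a readiness marker exists, like the elevateKubeSystemPrivileges wait.
flag=/tmp/sa-ready.flag
rm -f "$flag"
( sleep 1; touch "$flag" ) &   # stand-in for the control plane creating the default SA
attempts=0
until [ -e "$flag" ]; do
  attempts=$((attempts + 1))
  [ "$attempts" -ge 30 ] && break   # bounded retries instead of spinning forever
  sleep 0.5
done
wait
echo "ready after $attempts polls"
```

The real loop bounds the wait with a timeout rather than an attempt count, but the structure is the same: probe, sleep a fixed interval, stop on success or deadline.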
	I0919 09:34:58.331335    2119 kubeadm.go:406] StartCluster complete in 21.806183s
	I0919 09:34:58.331345    2119 settings.go:142] acquiring lock: {Name:mk7316c4de97357fafef76bf7f58c3638d00d866 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 09:34:58.331512    2119 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17240-943/kubeconfig
	I0919 09:34:58.332191    2119 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17240-943/kubeconfig: {Name:mk0534d05ae1a49ed75724777911378ef3989658 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 09:34:58.332625    2119 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 09:34:58.332665    2119 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0919 09:34:58.332725    2119 addons.go:69] Setting volumesnapshots=true in profile "addons-305000"
	I0919 09:34:58.332731    2119 addons.go:231] Setting addon volumesnapshots=true in "addons-305000"
	I0919 09:34:58.332736    2119 addons.go:69] Setting metrics-server=true in profile "addons-305000"
	I0919 09:34:58.332745    2119 addons.go:231] Setting addon metrics-server=true in "addons-305000"
	I0919 09:34:58.332750    2119 addons.go:69] Setting registry=true in profile "addons-305000"
	I0919 09:34:58.332756    2119 addons.go:231] Setting addon registry=true in "addons-305000"
	I0919 09:34:58.332759    2119 host.go:66] Checking if "addons-305000" exists ...
	I0919 09:34:58.332762    2119 addons.go:69] Setting storage-provisioner=true in profile "addons-305000"
	I0919 09:34:58.332756    2119 addons.go:69] Setting ingress=true in profile "addons-305000"
	I0919 09:34:58.332770    2119 host.go:66] Checking if "addons-305000" exists ...
	I0919 09:34:58.332774    2119 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-305000"
	I0919 09:34:58.332776    2119 addons.go:69] Setting ingress-dns=true in profile "addons-305000"
	I0919 09:34:58.332804    2119 addons.go:231] Setting addon ingress-dns=true in "addons-305000"
	I0919 09:34:58.332766    2119 addons.go:231] Setting addon storage-provisioner=true in "addons-305000"
	I0919 09:34:58.332823    2119 host.go:66] Checking if "addons-305000" exists ...
	I0919 09:34:58.332834    2119 addons.go:69] Setting cloud-spanner=true in profile "addons-305000"
	I0919 09:34:58.332843    2119 addons.go:69] Setting default-storageclass=true in profile "addons-305000"
	I0919 09:34:58.332876    2119 addons.go:231] Setting addon cloud-spanner=true in "addons-305000"
	I0919 09:34:58.332883    2119 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-305000"
	I0919 09:34:58.332892    2119 host.go:66] Checking if "addons-305000" exists ...
	I0919 09:34:58.332912    2119 config.go:182] Loaded profile config "addons-305000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 09:34:58.332944    2119 addons.go:69] Setting gcp-auth=true in profile "addons-305000"
	I0919 09:34:58.332950    2119 mustload.go:65] Loading cluster: addons-305000
	I0919 09:34:58.332975    2119 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-305000"
	I0919 09:34:58.333012    2119 host.go:66] Checking if "addons-305000" exists ...
	I0919 09:34:58.333019    2119 config.go:182] Loaded profile config "addons-305000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 09:34:58.332759    2119 host.go:66] Checking if "addons-305000" exists ...
	W0919 09:34:58.333089    2119 host.go:54] host status for "addons-305000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17240-943/.minikube/machines/addons-305000/monitor: connect: connection refused
	W0919 09:34:58.333097    2119 addons.go:277] "addons-305000" is not running, setting ingress-dns=true and skipping enablement (err=<nil>)
	I0919 09:34:58.332804    2119 addons.go:231] Setting addon ingress=true in "addons-305000"
	I0919 09:34:58.333110    2119 host.go:66] Checking if "addons-305000" exists ...
	I0919 09:34:58.337651    2119 out.go:177] 
	I0919 09:34:58.332893    2119 host.go:66] Checking if "addons-305000" exists ...
	I0919 09:34:58.332770    2119 addons.go:69] Setting inspektor-gadget=true in profile "addons-305000"
	W0919 09:34:58.333346    2119 host.go:54] host status for "addons-305000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17240-943/.minikube/machines/addons-305000/monitor: connect: connection refused
	W0919 09:34:58.333365    2119 host.go:54] host status for "addons-305000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17240-943/.minikube/machines/addons-305000/monitor: connect: connection refused
	W0919 09:34:58.333413    2119 host.go:54] host status for "addons-305000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17240-943/.minikube/machines/addons-305000/monitor: connect: connection refused
	W0919 09:34:58.333498    2119 host.go:54] host status for "addons-305000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17240-943/.minikube/machines/addons-305000/monitor: connect: connection refused
	W0919 09:34:58.333510    2119 host.go:54] host status for "addons-305000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17240-943/.minikube/machines/addons-305000/monitor: connect: connection refused
	I0919 09:34:58.341674    2119 addons.go:231] Setting addon inspektor-gadget=true in "addons-305000"
	W0919 09:34:58.341698    2119 out.go:239] X Exiting due to GUEST_STATUS: Unable to get machine status: state: connect: dial unix /Users/jenkins/minikube-integration/17240-943/.minikube/machines/addons-305000/monitor: connect: connection refused
	W0919 09:34:58.341715    2119 addons.go:277] "addons-305000" is not running, setting registry=true and skipping enablement (err=<nil>)
	W0919 09:34:58.341719    2119 addons.go:277] "addons-305000" is not running, setting ingress=true and skipping enablement (err=<nil>)
	W0919 09:34:58.341725    2119 addons_storage_classes.go:55] "addons-305000" is not running, writing default-storageclass=true to disk and skipping enablement
	W0919 09:34:58.341729    2119 addons.go:277] "addons-305000" is not running, setting storage-provisioner=true and skipping enablement (err=<nil>)
	W0919 09:34:58.341730    2119 addons.go:277] "addons-305000" is not running, setting csi-hostpath-driver=true and skipping enablement (err=<nil>)
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: connect: dial unix /Users/jenkins/minikube-integration/17240-943/.minikube/machines/addons-305000/monitor: connect: connection refused
	I0919 09:34:58.345520    2119 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0919 09:34:58.348656    2119 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0919 09:34:58.348665    2119 addons.go:467] Verifying addon ingress=true in "addons-305000"
	I0919 09:34:58.348669    2119 addons.go:467] Verifying addon registry=true in "addons-305000"
	I0919 09:34:58.348673    2119 addons.go:231] Setting addon default-storageclass=true in "addons-305000"
	I0919 09:34:58.348676    2119 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-305000"
	I0919 09:34:58.348752    2119 host.go:66] Checking if "addons-305000" exists ...
	W0919 09:34:58.348770    2119 out.go:239] * 
	I0919 09:34:58.352664    2119 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.10
	* 
	I0919 09:34:58.372511    2119 out.go:177] * Verifying ingress addon...
	I0919 09:34:58.352730    2119 host.go:66] Checking if "addons-305000" exists ...
	W0919 09:34:58.353356    2119 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 09:34:58.359916    2119 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-305000" context rescaled to 1 replicas
	I0919 09:34:58.368647    2119 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I0919 09:34:58.379642    2119 out.go:177] * Verifying csi-hostpath-driver addon...
	I0919 09:34:58.380429    2119 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 09:34:58.383717    2119 out.go:177] * Verifying registry addon...
	I0919 09:34:58.387507    2119 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0919 09:34:58.391667    2119 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0919 09:34:58.394755    2119 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.20.0
	I0919 09:34:58.394761    2119 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0919 09:34:58.394790    2119 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 09:34:58.396212    2119 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 09:34:58.397644    2119 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0919 09:34:58.397651    2119 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0919 09:34:58.397629    2119 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 09:34:58.398055    2119 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0919 09:34:58.400679    2119 out.go:177] 
	I0919 09:34:58.400709    2119 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/addons-305000/id_rsa Username:docker}
	I0919 09:34:58.400690    2119 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/addons-305000/id_rsa Username:docker}
	I0919 09:34:58.403665    2119 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/addons-305000/id_rsa Username:docker}
	I0919 09:34:58.404046    2119 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0919 09:34:58.409735    2119 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/addons-305000/id_rsa Username:docker}
	I0919 09:34:58.410037    2119 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0919 09:34:58.412724    2119 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0919 09:34:58.415087    2119 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0919 09:34:58.416733    2119 out.go:177] * Verifying Kubernetes components...
	I0919 09:34:58.418995    2119 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=registry

** /stderr **
addons_test.go:90: out/minikube-darwin-arm64 start -p addons-305000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 80
--- FAIL: TestAddons/Setup (45.48s)

TestCertOptions (10.06s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-386000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-386000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.791670458s)

-- stdout --
	* [cert-options-386000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17240
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node cert-options-386000 in cluster cert-options-386000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-386000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-386000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-386000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-386000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-386000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 89 (78.985625ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-386000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-386000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 89
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-386000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-386000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-386000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 89 (37.018292ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-386000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-386000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 89
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-386000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2023-09-19 09:47:29.465661 -0700 PDT m=+831.921814210
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-386000 -n cert-options-386000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-386000 -n cert-options-386000: exit status 7 (27.244208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-386000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-386000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-386000
--- FAIL: TestCertOptions (10.06s)
E0919 09:47:44.386773    2051 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/functional-085000/client.crt: no such file or directory
E0919 09:48:12.095012    2051 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/functional-085000/client.crt: no such file or directory
E0919 09:48:36.634560    2051 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/ingress-addon-legacy-969000/client.crt: no such file or directory

TestCertExpiration (195.46s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-744000 --memory=2048 --cert-expiration=3m --driver=qemu2 
E0919 09:47:14.713467    2051 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/ingress-addon-legacy-969000/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-744000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (10.068792667s)

-- stdout --
	* [cert-expiration-744000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17240
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node cert-expiration-744000 in cluster cert-expiration-744000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-744000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-744000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-744000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-744000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-744000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.2244595s)

-- stdout --
	* [cert-expiration-744000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17240
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node cert-expiration-744000 in cluster cert-expiration-744000
	* Restarting existing qemu2 VM for "cert-expiration-744000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-744000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-744000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-744000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-744000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17240
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node cert-expiration-744000 in cluster cert-expiration-744000
	* Restarting existing qemu2 VM for "cert-expiration-744000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-744000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-744000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2023-09-19 09:50:29.674529 -0700 PDT m=+1012.133821043
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-744000 -n cert-expiration-744000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-744000 -n cert-expiration-744000: exit status 7 (63.8705ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-744000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-744000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-744000
--- FAIL: TestCertExpiration (195.46s)

TestDockerFlags (9.98s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-137000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-137000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.738600416s)

-- stdout --
	* [docker-flags-137000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17240
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node docker-flags-137000 in cluster docker-flags-137000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-137000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0919 09:47:09.574278    3815 out.go:296] Setting OutFile to fd 1 ...
	I0919 09:47:09.574399    3815 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:47:09.574402    3815 out.go:309] Setting ErrFile to fd 2...
	I0919 09:47:09.574405    3815 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:47:09.574531    3815 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17240-943/.minikube/bin
	I0919 09:47:09.575546    3815 out.go:303] Setting JSON to false
	I0919 09:47:09.590940    3815 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1003,"bootTime":1695141026,"procs":402,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0919 09:47:09.591024    3815 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0919 09:47:09.596514    3815 out.go:177] * [docker-flags-137000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0919 09:47:09.604363    3815 out.go:177]   - MINIKUBE_LOCATION=17240
	I0919 09:47:09.608233    3815 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	I0919 09:47:09.604419    3815 notify.go:220] Checking for updates...
	I0919 09:47:09.611421    3815 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 09:47:09.614354    3815 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 09:47:09.617383    3815 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	I0919 09:47:09.624345    3815 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 09:47:09.629685    3815 config.go:182] Loaded profile config "force-systemd-flag-094000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 09:47:09.629761    3815 config.go:182] Loaded profile config "multinode-120000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 09:47:09.629816    3815 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 09:47:09.631862    3815 out.go:177] * Using the qemu2 driver based on user configuration
	I0919 09:47:09.640351    3815 start.go:298] selected driver: qemu2
	I0919 09:47:09.640358    3815 start.go:902] validating driver "qemu2" against <nil>
	I0919 09:47:09.640364    3815 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 09:47:09.642414    3815 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0919 09:47:09.647342    3815 out.go:177] * Automatically selected the socket_vmnet network
	I0919 09:47:09.651448    3815 start_flags.go:917] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0919 09:47:09.651471    3815 cni.go:84] Creating CNI manager for ""
	I0919 09:47:09.651478    3815 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 09:47:09.651482    3815 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0919 09:47:09.651488    3815 start_flags.go:321] config:
	{Name:docker-flags-137000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:docker-flags-137000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain
:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/ru
n/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 09:47:09.655688    3815 iso.go:125] acquiring lock: {Name:mka8e023e06bcb4803b8b8b48f1b2d6ef6b15681 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 09:47:09.663358    3815 out.go:177] * Starting control plane node docker-flags-137000 in cluster docker-flags-137000
	I0919 09:47:09.666344    3815 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0919 09:47:09.666364    3815 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0919 09:47:09.666374    3815 cache.go:57] Caching tarball of preloaded images
	I0919 09:47:09.666436    3815 preload.go:174] Found /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0919 09:47:09.666442    3815 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0919 09:47:09.666519    3815 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/docker-flags-137000/config.json ...
	I0919 09:47:09.666536    3815 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/docker-flags-137000/config.json: {Name:mkdafb0b05164686eb5dab484df8183291618dd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 09:47:09.666759    3815 start.go:365] acquiring machines lock for docker-flags-137000: {Name:mk731f293d5b39390bcf4e15f4078ebe7c03576e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 09:47:09.666791    3815 start.go:369] acquired machines lock for "docker-flags-137000" in 25.708µs
	I0919 09:47:09.666803    3815 start.go:93] Provisioning new machine with config: &{Name:docker-flags-137000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root
SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:docker-flags-137000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion
:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 09:47:09.666840    3815 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 09:47:09.674339    3815 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0919 09:47:09.691314    3815 start.go:159] libmachine.API.Create for "docker-flags-137000" (driver="qemu2")
	I0919 09:47:09.691339    3815 client.go:168] LocalClient.Create starting
	I0919 09:47:09.691400    3815 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca.pem
	I0919 09:47:09.691427    3815 main.go:141] libmachine: Decoding PEM data...
	I0919 09:47:09.691441    3815 main.go:141] libmachine: Parsing certificate...
	I0919 09:47:09.691485    3815 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/cert.pem
	I0919 09:47:09.691505    3815 main.go:141] libmachine: Decoding PEM data...
	I0919 09:47:09.691514    3815 main.go:141] libmachine: Parsing certificate...
	I0919 09:47:09.691863    3815 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17240-943/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17240-943/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0919 09:47:09.809156    3815 main.go:141] libmachine: Creating SSH key...
	I0919 09:47:09.874332    3815 main.go:141] libmachine: Creating Disk image...
	I0919 09:47:09.874338    3815 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 09:47:09.874482    3815 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17240-943/.minikube/machines/docker-flags-137000/disk.qcow2.raw /Users/jenkins/minikube-integration/17240-943/.minikube/machines/docker-flags-137000/disk.qcow2
	I0919 09:47:09.883114    3815 main.go:141] libmachine: STDOUT: 
	I0919 09:47:09.883128    3815 main.go:141] libmachine: STDERR: 
	I0919 09:47:09.883190    3815 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/docker-flags-137000/disk.qcow2 +20000M
	I0919 09:47:09.890350    3815 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 09:47:09.890363    3815 main.go:141] libmachine: STDERR: 
	I0919 09:47:09.890376    3815 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17240-943/.minikube/machines/docker-flags-137000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17240-943/.minikube/machines/docker-flags-137000/disk.qcow2
	I0919 09:47:09.890383    3815 main.go:141] libmachine: Starting QEMU VM...
	I0919 09:47:09.890420    3815 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17240-943/.minikube/machines/docker-flags-137000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/docker-flags-137000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17240-943/.minikube/machines/docker-flags-137000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:33:ce:ca:f3:f2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/docker-flags-137000/disk.qcow2
	I0919 09:47:09.891942    3815 main.go:141] libmachine: STDOUT: 
	I0919 09:47:09.891960    3815 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 09:47:09.891976    3815 client.go:171] LocalClient.Create took 200.635917ms
	I0919 09:47:11.894195    3815 start.go:128] duration metric: createHost completed in 2.227312416s
	I0919 09:47:11.894281    3815 start.go:83] releasing machines lock for "docker-flags-137000", held for 2.227516459s
	W0919 09:47:11.894336    3815 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:47:11.904447    3815 out.go:177] * Deleting "docker-flags-137000" in qemu2 ...
	W0919 09:47:11.924258    3815 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:47:11.924344    3815 start.go:703] Will try again in 5 seconds ...
	I0919 09:47:16.926500    3815 start.go:365] acquiring machines lock for docker-flags-137000: {Name:mk731f293d5b39390bcf4e15f4078ebe7c03576e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 09:47:16.926853    3815 start.go:369] acquired machines lock for "docker-flags-137000" in 278.875µs
	I0919 09:47:16.926993    3815 start.go:93] Provisioning new machine with config: &{Name:docker-flags-137000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root
SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:docker-flags-137000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion
:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 09:47:16.927246    3815 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 09:47:16.936607    3815 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0919 09:47:16.983693    3815 start.go:159] libmachine.API.Create for "docker-flags-137000" (driver="qemu2")
	I0919 09:47:16.983745    3815 client.go:168] LocalClient.Create starting
	I0919 09:47:16.983867    3815 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca.pem
	I0919 09:47:16.983933    3815 main.go:141] libmachine: Decoding PEM data...
	I0919 09:47:16.983959    3815 main.go:141] libmachine: Parsing certificate...
	I0919 09:47:16.984023    3815 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/cert.pem
	I0919 09:47:16.984064    3815 main.go:141] libmachine: Decoding PEM data...
	I0919 09:47:16.984082    3815 main.go:141] libmachine: Parsing certificate...
	I0919 09:47:16.984837    3815 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17240-943/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17240-943/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0919 09:47:17.113812    3815 main.go:141] libmachine: Creating SSH key...
	I0919 09:47:17.228657    3815 main.go:141] libmachine: Creating Disk image...
	I0919 09:47:17.228662    3815 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 09:47:17.228807    3815 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17240-943/.minikube/machines/docker-flags-137000/disk.qcow2.raw /Users/jenkins/minikube-integration/17240-943/.minikube/machines/docker-flags-137000/disk.qcow2
	I0919 09:47:17.237329    3815 main.go:141] libmachine: STDOUT: 
	I0919 09:47:17.237349    3815 main.go:141] libmachine: STDERR: 
	I0919 09:47:17.237402    3815 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/docker-flags-137000/disk.qcow2 +20000M
	I0919 09:47:17.244476    3815 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 09:47:17.244491    3815 main.go:141] libmachine: STDERR: 
	I0919 09:47:17.244503    3815 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17240-943/.minikube/machines/docker-flags-137000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17240-943/.minikube/machines/docker-flags-137000/disk.qcow2
	I0919 09:47:17.244511    3815 main.go:141] libmachine: Starting QEMU VM...
	I0919 09:47:17.244562    3815 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17240-943/.minikube/machines/docker-flags-137000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/docker-flags-137000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17240-943/.minikube/machines/docker-flags-137000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:e3:46:18:69:2d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/docker-flags-137000/disk.qcow2
	I0919 09:47:17.246031    3815 main.go:141] libmachine: STDOUT: 
	I0919 09:47:17.246047    3815 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 09:47:17.246061    3815 client.go:171] LocalClient.Create took 262.314208ms
	I0919 09:47:19.248203    3815 start.go:128] duration metric: createHost completed in 2.320970416s
	I0919 09:47:19.248291    3815 start.go:83] releasing machines lock for "docker-flags-137000", held for 2.32143325s
	W0919 09:47:19.248693    3815 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-137000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-137000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:47:19.258412    3815 out.go:177] 
	W0919 09:47:19.262432    3815 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 09:47:19.262480    3815 out.go:239] * 
	* 
	W0919 09:47:19.265306    3815 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 09:47:19.274367    3815 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-137000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-137000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-137000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 89 (73.386125ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p docker-flags-137000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-137000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 89
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-137000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-137000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-137000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-137000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 89 (42.809708ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p docker-flags-137000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-137000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 89
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-137000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-137000\"\n"
panic.go:523: *** TestDockerFlags FAILED at 2023-09-19 09:47:19.406514 -0700 PDT m=+821.862492668
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-137000 -n docker-flags-137000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-137000 -n docker-flags-137000: exit status 7 (26.604584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-137000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-137000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-137000
--- FAIL: TestDockerFlags (9.98s)
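Every start attempt in this run fails at the same point: `socket_vmnet_client` cannot reach `/var/run/socket_vmnet` ("Connection refused") before QEMU is even launched, so the VM is never created. A "Connection refused" on a Unix-domain socket usually means the socket file exists but no daemon is accepting on it (a stale file or a dead `socket_vmnet` service), as opposed to the file being missing entirely. The sketch below (a hypothetical helper, not part of the test suite; the path is taken from the log) shows how those states can be distinguished when triaging a host like this agent:

```python
import os
import socket

def probe_unix_socket(path: str) -> str:
    """Classify the state of a Unix-domain socket path.

    Returns one of:
      "missing"   - no socket file at all; the daemon was never started
      "refused"   - file exists but nothing is accepting connections
                    (stale socket or dead daemon) - the state matching
                    the repeated error in this log
      "listening" - a daemon is accepting connections
      "error"     - some other OS error (e.g. insufficient permissions)
    """
    if not os.path.exists(path):
        return "missing"
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.connect(path)
        return "listening"
    except ConnectionRefusedError:
        return "refused"
    except OSError:
        return "error"
    finally:
        s.close()

if __name__ == "__main__":
    # Probe the path socket_vmnet_client failed against in this run.
    print(probe_unix_socket("/var/run/socket_vmnet"))
```

On a host in the "refused" state, restarting the `socket_vmnet` service (e.g. via `brew services`, if it was installed with Homebrew) would typically be the next step before re-running the suite.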

TestForceSystemdFlag (11.52s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-094000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-094000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (11.334650291s)

-- stdout --
	* [force-systemd-flag-094000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17240
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node force-systemd-flag-094000 in cluster force-systemd-flag-094000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-094000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0919 09:47:02.885956    3790 out.go:296] Setting OutFile to fd 1 ...
	I0919 09:47:02.886106    3790 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:47:02.886108    3790 out.go:309] Setting ErrFile to fd 2...
	I0919 09:47:02.886111    3790 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:47:02.886249    3790 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17240-943/.minikube/bin
	I0919 09:47:02.887209    3790 out.go:303] Setting JSON to false
	I0919 09:47:02.902087    3790 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":996,"bootTime":1695141026,"procs":401,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0919 09:47:02.902147    3790 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0919 09:47:02.907766    3790 out.go:177] * [force-systemd-flag-094000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0919 09:47:02.914786    3790 out.go:177]   - MINIKUBE_LOCATION=17240
	I0919 09:47:02.914842    3790 notify.go:220] Checking for updates...
	I0919 09:47:02.918726    3790 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	I0919 09:47:02.921798    3790 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 09:47:02.924713    3790 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 09:47:02.927701    3790 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	I0919 09:47:02.930708    3790 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 09:47:02.933957    3790 config.go:182] Loaded profile config "force-systemd-env-863000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 09:47:02.934025    3790 config.go:182] Loaded profile config "multinode-120000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 09:47:02.934066    3790 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 09:47:02.937684    3790 out.go:177] * Using the qemu2 driver based on user configuration
	I0919 09:47:02.944675    3790 start.go:298] selected driver: qemu2
	I0919 09:47:02.944682    3790 start.go:902] validating driver "qemu2" against <nil>
	I0919 09:47:02.944688    3790 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 09:47:02.946778    3790 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0919 09:47:02.950685    3790 out.go:177] * Automatically selected the socket_vmnet network
	I0919 09:47:02.954800    3790 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0919 09:47:02.954817    3790 cni.go:84] Creating CNI manager for ""
	I0919 09:47:02.954824    3790 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 09:47:02.954827    3790 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0919 09:47:02.954831    3790 start_flags.go:321] config:
	{Name:force-systemd-flag-094000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:force-systemd-flag-094000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 09:47:02.958996    3790 iso.go:125] acquiring lock: {Name:mka8e023e06bcb4803b8b8b48f1b2d6ef6b15681 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 09:47:02.967736    3790 out.go:177] * Starting control plane node force-systemd-flag-094000 in cluster force-systemd-flag-094000
	I0919 09:47:02.971589    3790 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0919 09:47:02.971605    3790 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0919 09:47:02.971612    3790 cache.go:57] Caching tarball of preloaded images
	I0919 09:47:02.971663    3790 preload.go:174] Found /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0919 09:47:02.971668    3790 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0919 09:47:02.971713    3790 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/force-systemd-flag-094000/config.json ...
	I0919 09:47:02.971724    3790 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/force-systemd-flag-094000/config.json: {Name:mkce494be6d2430f6c66d9f527a276c55090a7d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 09:47:02.971910    3790 start.go:365] acquiring machines lock for force-systemd-flag-094000: {Name:mk731f293d5b39390bcf4e15f4078ebe7c03576e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 09:47:02.971940    3790 start.go:369] acquired machines lock for "force-systemd-flag-094000" in 22.125µs
	I0919 09:47:02.971952    3790 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-094000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:force-systemd-flag-094000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 09:47:02.971979    3790 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 09:47:02.979711    3790 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0919 09:47:02.994787    3790 start.go:159] libmachine.API.Create for "force-systemd-flag-094000" (driver="qemu2")
	I0919 09:47:02.994820    3790 client.go:168] LocalClient.Create starting
	I0919 09:47:02.994882    3790 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca.pem
	I0919 09:47:02.994909    3790 main.go:141] libmachine: Decoding PEM data...
	I0919 09:47:02.994919    3790 main.go:141] libmachine: Parsing certificate...
	I0919 09:47:02.994958    3790 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/cert.pem
	I0919 09:47:02.994975    3790 main.go:141] libmachine: Decoding PEM data...
	I0919 09:47:02.994981    3790 main.go:141] libmachine: Parsing certificate...
	I0919 09:47:02.995293    3790 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17240-943/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17240-943/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0919 09:47:03.111906    3790 main.go:141] libmachine: Creating SSH key...
	I0919 09:47:03.344257    3790 main.go:141] libmachine: Creating Disk image...
	I0919 09:47:03.344269    3790 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 09:47:03.344448    3790 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17240-943/.minikube/machines/force-systemd-flag-094000/disk.qcow2.raw /Users/jenkins/minikube-integration/17240-943/.minikube/machines/force-systemd-flag-094000/disk.qcow2
	I0919 09:47:03.353572    3790 main.go:141] libmachine: STDOUT: 
	I0919 09:47:03.353590    3790 main.go:141] libmachine: STDERR: 
	I0919 09:47:03.353659    3790 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/force-systemd-flag-094000/disk.qcow2 +20000M
	I0919 09:47:03.361006    3790 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 09:47:03.361018    3790 main.go:141] libmachine: STDERR: 
	I0919 09:47:03.361039    3790 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17240-943/.minikube/machines/force-systemd-flag-094000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17240-943/.minikube/machines/force-systemd-flag-094000/disk.qcow2
	I0919 09:47:03.361051    3790 main.go:141] libmachine: Starting QEMU VM...
	I0919 09:47:03.361095    3790 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17240-943/.minikube/machines/force-systemd-flag-094000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/force-systemd-flag-094000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17240-943/.minikube/machines/force-systemd-flag-094000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:f9:ca:b6:24:e4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/force-systemd-flag-094000/disk.qcow2
	I0919 09:47:03.362622    3790 main.go:141] libmachine: STDOUT: 
	I0919 09:47:03.362634    3790 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 09:47:03.362655    3790 client.go:171] LocalClient.Create took 367.837208ms
	I0919 09:47:05.364938    3790 start.go:128] duration metric: createHost completed in 2.392919959s
	I0919 09:47:05.365046    3790 start.go:83] releasing machines lock for "force-systemd-flag-094000", held for 2.393137292s
	W0919 09:47:05.365102    3790 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:47:05.371721    3790 out.go:177] * Deleting "force-systemd-flag-094000" in qemu2 ...
	W0919 09:47:05.391970    3790 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:47:05.392000    3790 start.go:703] Will try again in 5 seconds ...
	I0919 09:47:10.394131    3790 start.go:365] acquiring machines lock for force-systemd-flag-094000: {Name:mk731f293d5b39390bcf4e15f4078ebe7c03576e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 09:47:11.894450    3790 start.go:369] acquired machines lock for "force-systemd-flag-094000" in 1.500217333s
	I0919 09:47:11.894617    3790 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-094000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:force-systemd-flag-094000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 09:47:11.894854    3790 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 09:47:11.901542    3790 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0919 09:47:11.946785    3790 start.go:159] libmachine.API.Create for "force-systemd-flag-094000" (driver="qemu2")
	I0919 09:47:11.946819    3790 client.go:168] LocalClient.Create starting
	I0919 09:47:11.946945    3790 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca.pem
	I0919 09:47:11.947009    3790 main.go:141] libmachine: Decoding PEM data...
	I0919 09:47:11.947025    3790 main.go:141] libmachine: Parsing certificate...
	I0919 09:47:11.947094    3790 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/cert.pem
	I0919 09:47:11.947131    3790 main.go:141] libmachine: Decoding PEM data...
	I0919 09:47:11.947147    3790 main.go:141] libmachine: Parsing certificate...
	I0919 09:47:11.947614    3790 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17240-943/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17240-943/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0919 09:47:12.076628    3790 main.go:141] libmachine: Creating SSH key...
	I0919 09:47:12.134643    3790 main.go:141] libmachine: Creating Disk image...
	I0919 09:47:12.134649    3790 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 09:47:12.134786    3790 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17240-943/.minikube/machines/force-systemd-flag-094000/disk.qcow2.raw /Users/jenkins/minikube-integration/17240-943/.minikube/machines/force-systemd-flag-094000/disk.qcow2
	I0919 09:47:12.143321    3790 main.go:141] libmachine: STDOUT: 
	I0919 09:47:12.143334    3790 main.go:141] libmachine: STDERR: 
	I0919 09:47:12.143383    3790 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/force-systemd-flag-094000/disk.qcow2 +20000M
	I0919 09:47:12.150497    3790 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 09:47:12.150510    3790 main.go:141] libmachine: STDERR: 
	I0919 09:47:12.150523    3790 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17240-943/.minikube/machines/force-systemd-flag-094000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17240-943/.minikube/machines/force-systemd-flag-094000/disk.qcow2
	I0919 09:47:12.150530    3790 main.go:141] libmachine: Starting QEMU VM...
	I0919 09:47:12.150573    3790 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17240-943/.minikube/machines/force-systemd-flag-094000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/force-systemd-flag-094000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17240-943/.minikube/machines/force-systemd-flag-094000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:cc:91:fc:34:12 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/force-systemd-flag-094000/disk.qcow2
	I0919 09:47:12.152066    3790 main.go:141] libmachine: STDOUT: 
	I0919 09:47:12.152079    3790 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 09:47:12.152091    3790 client.go:171] LocalClient.Create took 205.269958ms
	I0919 09:47:14.154266    3790 start.go:128] duration metric: createHost completed in 2.259411416s
	I0919 09:47:14.154355    3790 start.go:83] releasing machines lock for "force-systemd-flag-094000", held for 2.259908208s
	W0919 09:47:14.154823    3790 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-094000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-094000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:47:14.165560    3790 out.go:177] 
	W0919 09:47:14.169547    3790 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 09:47:14.169580    3790 out.go:239] * 
	* 
	W0919 09:47:14.177477    3790 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 09:47:14.183472    3790 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-094000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-094000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-094000 ssh "docker info --format {{.CgroupDriver}}": exit status 89 (62.481291ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p force-systemd-flag-094000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-094000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 89
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2023-09-19 09:47:14.259781 -0700 PDT m=+816.715669460
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-094000 -n force-systemd-flag-094000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-094000 -n force-systemd-flag-094000: exit status 7 (31.93925ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-094000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-094000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-094000
--- FAIL: TestForceSystemdFlag (11.52s)
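Both provisioning attempts in this test die at the same step: the QEMU command is wrapped in `/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet …`, and the client's dial of that unix socket fails with `Connection refused`, i.e. no `socket_vmnet` daemon is accepting on that path on the CI host. As a minimal sketch of that failure mode (not minikube's own code; the `probe` helper and the temp-socket paths are made up for illustration), a unix socket file that exists but has no process listening reproduces exactly this error, while a live listener does not:

```python
import os
import socket
import tempfile

def probe(path: str) -> bool:
    """Return True iff something is accepting connections on the unix socket at path."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.connect(path)
        return True
    except (ConnectionRefusedError, FileNotFoundError):
        # ECONNREFUSED: socket file exists but nothing listens on it --
        # the condition the log reports for /var/run/socket_vmnet.
        return False
    finally:
        s.close()

tmp = tempfile.mkdtemp()

# Bound but never listen()ed: the file is there, connects are refused.
dead = os.path.join(tmp, "dead.sock")
stale = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
stale.bind(dead)

# A listening socket on the same kind of path accepts the dial.
live = os.path.join(tmp, "live.sock")
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(live)
server.listen(1)

print(probe(dead))  # False
print(probe(live))  # True
```

Since the error occurs before the VM ever boots, the subsequent `ssh`/`status` failures (exit 89 and exit 7 above) are just consequences of the host never being created.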

TestForceSystemdEnv (9.86s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-863000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-863000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.662715875s)

-- stdout --
	* [force-systemd-env-863000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17240
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node force-systemd-env-863000 in cluster force-systemd-env-863000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-863000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0919 09:46:59.714254    3771 out.go:296] Setting OutFile to fd 1 ...
	I0919 09:46:59.714392    3771 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:46:59.714395    3771 out.go:309] Setting ErrFile to fd 2...
	I0919 09:46:59.714398    3771 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:46:59.714550    3771 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17240-943/.minikube/bin
	I0919 09:46:59.715634    3771 out.go:303] Setting JSON to false
	I0919 09:46:59.731862    3771 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":993,"bootTime":1695141026,"procs":402,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0919 09:46:59.731943    3771 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0919 09:46:59.736080    3771 out.go:177] * [force-systemd-env-863000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0919 09:46:59.744104    3771 out.go:177]   - MINIKUBE_LOCATION=17240
	I0919 09:46:59.748035    3771 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	I0919 09:46:59.744164    3771 notify.go:220] Checking for updates...
	I0919 09:46:59.754098    3771 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 09:46:59.757065    3771 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 09:46:59.764071    3771 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	I0919 09:46:59.771046    3771 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0919 09:46:59.775499    3771 config.go:182] Loaded profile config "multinode-120000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 09:46:59.775571    3771 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 09:46:59.779060    3771 out.go:177] * Using the qemu2 driver based on user configuration
	I0919 09:46:59.786078    3771 start.go:298] selected driver: qemu2
	I0919 09:46:59.786093    3771 start.go:902] validating driver "qemu2" against <nil>
	I0919 09:46:59.786100    3771 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 09:46:59.788643    3771 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0919 09:46:59.791048    3771 out.go:177] * Automatically selected the socket_vmnet network
	I0919 09:46:59.794159    3771 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0919 09:46:59.794197    3771 cni.go:84] Creating CNI manager for ""
	I0919 09:46:59.794206    3771 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 09:46:59.794213    3771 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0919 09:46:59.794219    3771 start_flags.go:321] config:
	{Name:force-systemd-env-863000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:force-systemd-env-863000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 09:46:59.799550    3771 iso.go:125] acquiring lock: {Name:mka8e023e06bcb4803b8b8b48f1b2d6ef6b15681 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 09:46:59.807033    3771 out.go:177] * Starting control plane node force-systemd-env-863000 in cluster force-systemd-env-863000
	I0919 09:46:59.811093    3771 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0919 09:46:59.811136    3771 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0919 09:46:59.811145    3771 cache.go:57] Caching tarball of preloaded images
	I0919 09:46:59.811241    3771 preload.go:174] Found /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0919 09:46:59.811247    3771 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0919 09:46:59.811318    3771 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/force-systemd-env-863000/config.json ...
	I0919 09:46:59.811330    3771 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/force-systemd-env-863000/config.json: {Name:mk304d001391855bc506d855ffee4dca8a8e8ac9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 09:46:59.811581    3771 start.go:365] acquiring machines lock for force-systemd-env-863000: {Name:mk731f293d5b39390bcf4e15f4078ebe7c03576e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 09:46:59.811611    3771 start.go:369] acquired machines lock for "force-systemd-env-863000" in 24.25µs
	I0919 09:46:59.811623    3771 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-863000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:force-systemd-env-863000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 09:46:59.811655    3771 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 09:46:59.816053    3771 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0919 09:46:59.830999    3771 start.go:159] libmachine.API.Create for "force-systemd-env-863000" (driver="qemu2")
	I0919 09:46:59.831029    3771 client.go:168] LocalClient.Create starting
	I0919 09:46:59.831106    3771 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca.pem
	I0919 09:46:59.831131    3771 main.go:141] libmachine: Decoding PEM data...
	I0919 09:46:59.831138    3771 main.go:141] libmachine: Parsing certificate...
	I0919 09:46:59.831177    3771 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/cert.pem
	I0919 09:46:59.831195    3771 main.go:141] libmachine: Decoding PEM data...
	I0919 09:46:59.831200    3771 main.go:141] libmachine: Parsing certificate...
	I0919 09:46:59.831512    3771 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17240-943/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17240-943/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0919 09:46:59.953413    3771 main.go:141] libmachine: Creating SSH key...
	I0919 09:47:00.009690    3771 main.go:141] libmachine: Creating Disk image...
	I0919 09:47:00.009699    3771 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 09:47:00.009854    3771 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17240-943/.minikube/machines/force-systemd-env-863000/disk.qcow2.raw /Users/jenkins/minikube-integration/17240-943/.minikube/machines/force-systemd-env-863000/disk.qcow2
	I0919 09:47:00.018818    3771 main.go:141] libmachine: STDOUT: 
	I0919 09:47:00.018858    3771 main.go:141] libmachine: STDERR: 
	I0919 09:47:00.019208    3771 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/force-systemd-env-863000/disk.qcow2 +20000M
	I0919 09:47:00.027976    3771 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 09:47:00.028001    3771 main.go:141] libmachine: STDERR: 
	I0919 09:47:00.028017    3771 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17240-943/.minikube/machines/force-systemd-env-863000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17240-943/.minikube/machines/force-systemd-env-863000/disk.qcow2
	I0919 09:47:00.028022    3771 main.go:141] libmachine: Starting QEMU VM...
	I0919 09:47:00.028065    3771 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17240-943/.minikube/machines/force-systemd-env-863000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/force-systemd-env-863000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17240-943/.minikube/machines/force-systemd-env-863000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:e7:35:7f:6f:0d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/force-systemd-env-863000/disk.qcow2
	I0919 09:47:00.029736    3771 main.go:141] libmachine: STDOUT: 
	I0919 09:47:00.029753    3771 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 09:47:00.029772    3771 client.go:171] LocalClient.Create took 198.742417ms
	I0919 09:47:02.031935    3771 start.go:128] duration metric: createHost completed in 2.220284s
	I0919 09:47:02.032004    3771 start.go:83] releasing machines lock for "force-systemd-env-863000", held for 2.22042125s
	W0919 09:47:02.032080    3771 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:47:02.038290    3771 out.go:177] * Deleting "force-systemd-env-863000" in qemu2 ...
	W0919 09:47:02.060496    3771 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:47:02.060538    3771 start.go:703] Will try again in 5 seconds ...
	I0919 09:47:07.062757    3771 start.go:365] acquiring machines lock for force-systemd-env-863000: {Name:mk731f293d5b39390bcf4e15f4078ebe7c03576e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 09:47:07.063304    3771 start.go:369] acquired machines lock for "force-systemd-env-863000" in 377µs
	I0919 09:47:07.063458    3771 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-863000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:force-systemd-env-863000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 09:47:07.063754    3771 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 09:47:07.073373    3771 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0919 09:47:07.121175    3771 start.go:159] libmachine.API.Create for "force-systemd-env-863000" (driver="qemu2")
	I0919 09:47:07.121219    3771 client.go:168] LocalClient.Create starting
	I0919 09:47:07.121369    3771 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca.pem
	I0919 09:47:07.121431    3771 main.go:141] libmachine: Decoding PEM data...
	I0919 09:47:07.121448    3771 main.go:141] libmachine: Parsing certificate...
	I0919 09:47:07.121536    3771 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/cert.pem
	I0919 09:47:07.121575    3771 main.go:141] libmachine: Decoding PEM data...
	I0919 09:47:07.121590    3771 main.go:141] libmachine: Parsing certificate...
	I0919 09:47:07.122512    3771 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17240-943/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17240-943/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0919 09:47:07.253670    3771 main.go:141] libmachine: Creating SSH key...
	I0919 09:47:07.289551    3771 main.go:141] libmachine: Creating Disk image...
	I0919 09:47:07.289557    3771 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 09:47:07.289684    3771 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17240-943/.minikube/machines/force-systemd-env-863000/disk.qcow2.raw /Users/jenkins/minikube-integration/17240-943/.minikube/machines/force-systemd-env-863000/disk.qcow2
	I0919 09:47:07.298167    3771 main.go:141] libmachine: STDOUT: 
	I0919 09:47:07.298186    3771 main.go:141] libmachine: STDERR: 
	I0919 09:47:07.298244    3771 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/force-systemd-env-863000/disk.qcow2 +20000M
	I0919 09:47:07.305397    3771 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 09:47:07.305410    3771 main.go:141] libmachine: STDERR: 
	I0919 09:47:07.305441    3771 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17240-943/.minikube/machines/force-systemd-env-863000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17240-943/.minikube/machines/force-systemd-env-863000/disk.qcow2
	I0919 09:47:07.305446    3771 main.go:141] libmachine: Starting QEMU VM...
	I0919 09:47:07.305480    3771 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17240-943/.minikube/machines/force-systemd-env-863000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/force-systemd-env-863000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17240-943/.minikube/machines/force-systemd-env-863000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:7a:b5:4b:7c:b5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/force-systemd-env-863000/disk.qcow2
	I0919 09:47:07.307022    3771 main.go:141] libmachine: STDOUT: 
	I0919 09:47:07.307035    3771 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 09:47:07.307048    3771 client.go:171] LocalClient.Create took 185.826959ms
	I0919 09:47:09.309193    3771 start.go:128] duration metric: createHost completed in 2.245446958s
	I0919 09:47:09.309257    3771 start.go:83] releasing machines lock for "force-systemd-env-863000", held for 2.24596875s
	W0919 09:47:09.309623    3771 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-863000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:47:09.320244    3771 out.go:177] 
	W0919 09:47:09.324285    3771 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 09:47:09.324310    3771 out.go:239] * 
	W0919 09:47:09.326925    3771 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 09:47:09.336150    3771 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-863000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-863000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-863000 ssh "docker info --format {{.CgroupDriver}}": exit status 89 (74.608833ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p force-systemd-env-863000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-863000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 89
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2023-09-19 09:47:09.426443 -0700 PDT m=+811.882247251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-863000 -n force-systemd-env-863000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-863000 -n force-systemd-env-863000: exit status 7 (32.641792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-863000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-863000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-863000
--- FAIL: TestForceSystemdEnv (9.86s)
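
Every QEMU start attempt above dies with `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. nothing is listening on the socket that `socket_vmnet_client` hands to QEMU. A minimal pre-flight sketch (paths taken from the log; the `check_socket` helper is hypothetical, not part of minikube) would verify the daemon's socket before launching the VM:

```shell
# Hypothetical pre-flight check for the socket_vmnet daemon.
# "Connection refused" on a unix socket means the path may exist but
# no process is accepting connections behind it.
check_socket() {
  if [ -S "$1" ]; then
    echo "present"
  else
    echo "missing"
  fi
}

# Path used by the failing runs in the log above.
check_socket /var/run/socket_vmnet
```

If the socket is missing or refused, the socket_vmnet daemon needs to be (re)started on the host before `minikube start --driver=qemu2` can succeed.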

TestFunctional/parallel/ServiceCmdConnect (38.16s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-085000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-085000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-slmdx" [79297217-3bab-4127-a07d-ffce4f0fab67] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-slmdx" [79297217-3bab-4127-a07d-ffce4f0fab67] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.006531459s
functional_test.go:1648: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.105.4:31619
functional_test.go:1660: error fetching http://192.168.105.4:31619: Get "http://192.168.105.4:31619": dial tcp 192.168.105.4:31619: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:31619: Get "http://192.168.105.4:31619": dial tcp 192.168.105.4:31619: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:31619: Get "http://192.168.105.4:31619": dial tcp 192.168.105.4:31619: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:31619: Get "http://192.168.105.4:31619": dial tcp 192.168.105.4:31619: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:31619: Get "http://192.168.105.4:31619": dial tcp 192.168.105.4:31619: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:31619: Get "http://192.168.105.4:31619": dial tcp 192.168.105.4:31619: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:31619: Get "http://192.168.105.4:31619": dial tcp 192.168.105.4:31619: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:31619: Get "http://192.168.105.4:31619": dial tcp 192.168.105.4:31619: connect: connection refused
functional_test.go:1680: failed to fetch http://192.168.105.4:31619: Get "http://192.168.105.4:31619": dial tcp 192.168.105.4:31619: connect: connection refused
functional_test.go:1597: service test failed - dumping debug information
functional_test.go:1598: -----------------------service failure post-mortem--------------------------------
functional_test.go:1601: (dbg) Run:  kubectl --context functional-085000 describe po hello-node-connect
functional_test.go:1605: hello-node pod describe:
Name:             hello-node-connect-7799dfb7c6-slmdx
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-085000/192.168.105.4
Start Time:       Tue, 19 Sep 2023 09:38:05 -0700
Labels:           app=hello-node-connect
pod-template-hash=7799dfb7c6
Annotations:      <none>
Status:           Running
IP:               10.244.0.9
IPs:
IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-connect-7799dfb7c6
Containers:
echoserver-arm:
Container ID:   docker://58af91ee54b83ad2017f8c1a14f369afffcaecee974f2e653909e594b8bb1719
Image:          registry.k8s.io/echoserver-arm:1.8
Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       CrashLoopBackOff
Last State:     Terminated
Reason:       Error
Exit Code:    1
Started:      Tue, 19 Sep 2023 09:38:18 -0700
Finished:     Tue, 19 Sep 2023 09:38:18 -0700
Ready:          False
Restart Count:  2
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jdrfq (ro)
Conditions:
Type              Status
Initialized       True 
Ready             False 
ContainersReady   False 
PodScheduled      True 
Volumes:
kube-api-access-jdrfq:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                From               Message
----     ------     ----               ----               -------
Normal   Scheduled  36s                default-scheduler  Successfully assigned default/hello-node-connect-7799dfb7c6-slmdx to functional-085000
Normal   Pulled     24s (x3 over 37s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
Normal   Created    24s (x3 over 37s)  kubelet            Created container echoserver-arm
Normal   Started    24s (x3 over 36s)  kubelet            Started container echoserver-arm
Warning  BackOff    10s (x3 over 35s)  kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-7799dfb7c6-slmdx_default(79297217-3bab-4127-a07d-ffce4f0fab67)

functional_test.go:1607: (dbg) Run:  kubectl --context functional-085000 logs -l app=hello-node-connect
functional_test.go:1611: hello-node logs:
exec /usr/sbin/nginx: exec format error
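
The container log `exec format error` is the kernel refusing to execute a binary built for a different CPU architecture than the node, which explains the CrashLoopBackOff and the empty service endpoints above. A rough first diagnostic (a sketch, not part of the test harness) is to compare the host architecture with the one the image targets:

```shell
# Sketch: "exec format error" = binary arch != node arch.
# Print the host side of the comparison.
host_arch=$(uname -m)
echo "host architecture: $host_arch"

# With a reachable docker daemon, the image side could be read with e.g.:
#   docker image inspect registry.k8s.io/echoserver-arm:1.8 --format '{{.Architecture}}'
```

A mismatch (e.g. an amd64 payload inside an image run on an arm64 node) reproduces exactly this failure mode.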
functional_test.go:1613: (dbg) Run:  kubectl --context functional-085000 describe svc hello-node-connect
functional_test.go:1617: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.100.213.139
IPs:                      10.100.213.139
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31619/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-085000 -n functional-085000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                                         Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh       | functional-085000 ssh findmnt                                                                                        | functional-085000 | jenkins | v1.31.2 | 19 Sep 23 09:38 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-085000                                                                                                 | functional-085000 | jenkins | v1.31.2 | 19 Sep 23 09:38 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1232754094/001:/mount-9p |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1 --port 46464                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-085000 ssh findmnt                                                                                        | functional-085000 | jenkins | v1.31.2 | 19 Sep 23 09:38 PDT | 19 Sep 23 09:38 PDT |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-085000 ssh -- ls                                                                                          | functional-085000 | jenkins | v1.31.2 | 19 Sep 23 09:38 PDT | 19 Sep 23 09:38 PDT |
	|           | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh       | functional-085000 ssh sudo                                                                                           | functional-085000 | jenkins | v1.31.2 | 19 Sep 23 09:38 PDT |                     |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| mount     | -p functional-085000                                                                                                 | functional-085000 | jenkins | v1.31.2 | 19 Sep 23 09:38 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3721660534/001:/mount1   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-085000                                                                                                 | functional-085000 | jenkins | v1.31.2 | 19 Sep 23 09:38 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3721660534/001:/mount3   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-085000                                                                                                 | functional-085000 | jenkins | v1.31.2 | 19 Sep 23 09:38 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3721660534/001:/mount2   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-085000 ssh findmnt                                                                                        | functional-085000 | jenkins | v1.31.2 | 19 Sep 23 09:38 PDT |                     |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-085000 ssh findmnt                                                                                        | functional-085000 | jenkins | v1.31.2 | 19 Sep 23 09:38 PDT | 19 Sep 23 09:38 PDT |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-085000 ssh findmnt                                                                                        | functional-085000 | jenkins | v1.31.2 | 19 Sep 23 09:38 PDT |                     |
	|           | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-085000 ssh findmnt                                                                                        | functional-085000 | jenkins | v1.31.2 | 19 Sep 23 09:38 PDT | 19 Sep 23 09:38 PDT |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-085000 ssh findmnt                                                                                        | functional-085000 | jenkins | v1.31.2 | 19 Sep 23 09:38 PDT |                     |
	|           | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-085000 ssh findmnt                                                                                        | functional-085000 | jenkins | v1.31.2 | 19 Sep 23 09:38 PDT | 19 Sep 23 09:38 PDT |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-085000 ssh findmnt                                                                                        | functional-085000 | jenkins | v1.31.2 | 19 Sep 23 09:38 PDT |                     |
	|           | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-085000 ssh findmnt                                                                                        | functional-085000 | jenkins | v1.31.2 | 19 Sep 23 09:38 PDT | 19 Sep 23 09:38 PDT |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-085000 ssh findmnt                                                                                        | functional-085000 | jenkins | v1.31.2 | 19 Sep 23 09:38 PDT |                     |
	|           | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-085000 ssh findmnt                                                                                        | functional-085000 | jenkins | v1.31.2 | 19 Sep 23 09:38 PDT | 19 Sep 23 09:38 PDT |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-085000 ssh findmnt                                                                                        | functional-085000 | jenkins | v1.31.2 | 19 Sep 23 09:38 PDT |                     |
	|           | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-085000 ssh findmnt                                                                                        | functional-085000 | jenkins | v1.31.2 | 19 Sep 23 09:38 PDT | 19 Sep 23 09:38 PDT |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-085000 ssh findmnt                                                                                        | functional-085000 | jenkins | v1.31.2 | 19 Sep 23 09:38 PDT |                     |
	|           | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| start     | -p functional-085000                                                                                                 | functional-085000 | jenkins | v1.31.2 | 19 Sep 23 09:38 PDT |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-085000 --dry-run                                                                                       | functional-085000 | jenkins | v1.31.2 | 19 Sep 23 09:38 PDT |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-085000                                                                                                 | functional-085000 | jenkins | v1.31.2 | 19 Sep 23 09:38 PDT |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                                                                   | functional-085000 | jenkins | v1.31.2 | 19 Sep 23 09:38 PDT |                     |
	|           | -p functional-085000                                                                                                 |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/19 09:38:41
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.21.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 09:38:41.881970    2813 out.go:296] Setting OutFile to fd 1 ...
	I0919 09:38:41.882094    2813 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:38:41.882098    2813 out.go:309] Setting ErrFile to fd 2...
	I0919 09:38:41.882100    2813 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:38:41.882223    2813 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17240-943/.minikube/bin
	I0919 09:38:41.883523    2813 out.go:303] Setting JSON to false
	I0919 09:38:41.899549    2813 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":495,"bootTime":1695141026,"procs":405,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0919 09:38:41.899643    2813 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0919 09:38:41.903898    2813 out.go:177] * [functional-085000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0919 09:38:41.910856    2813 out.go:177]   - MINIKUBE_LOCATION=17240
	I0919 09:38:41.914882    2813 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	I0919 09:38:41.910954    2813 notify.go:220] Checking for updates...
	I0919 09:38:41.921838    2813 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 09:38:41.924884    2813 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 09:38:41.927919    2813 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	I0919 09:38:41.930848    2813 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 09:38:41.934095    2813 config.go:182] Loaded profile config "functional-085000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 09:38:41.934362    2813 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 09:38:41.938891    2813 out.go:177] * Using the qemu2 driver based on the existing profile
	I0919 09:38:41.945878    2813 start.go:298] selected driver: qemu2
	I0919 09:38:41.945883    2813 start.go:902] validating driver "qemu2" against &{Name:functional-085000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:functional-085000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 09:38:41.945930    2813 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 09:38:41.951714    2813 out.go:177] 
	W0919 09:38:41.955921    2813 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation of 250 MiB is below the usable minimum of 1800 MB
	I0919 09:38:41.959842    2813 out.go:177] 
	
	* 
	* ==> Docker <==
	* -- Journal begins at Tue 2023-09-19 16:35:43 UTC, ends at Tue 2023-09-19 16:38:42 UTC. --
	Sep 19 16:38:23 functional-085000 dockerd[6577]: time="2023-09-19T16:38:23.230807508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 19 16:38:23 functional-085000 dockerd[6577]: time="2023-09-19T16:38:23.230815591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 19 16:38:23 functional-085000 dockerd[6577]: time="2023-09-19T16:38:23.230820174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 19 16:38:23 functional-085000 cri-dockerd[6835]: time="2023-09-19T16:38:23Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3c07a1e8b2b3af7b015f92e9ee1f7cd18a1cca3a74a8d87093464e960660972d/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 19 16:38:24 functional-085000 cri-dockerd[6835]: time="2023-09-19T16:38:24Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	Sep 19 16:38:24 functional-085000 dockerd[6577]: time="2023-09-19T16:38:24.528327799Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 19 16:38:24 functional-085000 dockerd[6577]: time="2023-09-19T16:38:24.528355091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 19 16:38:24 functional-085000 dockerd[6577]: time="2023-09-19T16:38:24.528367257Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 19 16:38:24 functional-085000 dockerd[6577]: time="2023-09-19T16:38:24.528373674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 19 16:38:24 functional-085000 dockerd[6571]: time="2023-09-19T16:38:24.581640196Z" level=info msg="ignoring event" container=a8d45397616233a630cd514dc322c9ef2a1f460bec3d291a0430f83252163f41 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 16:38:24 functional-085000 dockerd[6577]: time="2023-09-19T16:38:24.581724278Z" level=info msg="shim disconnected" id=a8d45397616233a630cd514dc322c9ef2a1f460bec3d291a0430f83252163f41 namespace=moby
	Sep 19 16:38:24 functional-085000 dockerd[6577]: time="2023-09-19T16:38:24.581749735Z" level=warning msg="cleaning up after shim disconnected" id=a8d45397616233a630cd514dc322c9ef2a1f460bec3d291a0430f83252163f41 namespace=moby
	Sep 19 16:38:24 functional-085000 dockerd[6577]: time="2023-09-19T16:38:24.581753902Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 19 16:38:26 functional-085000 dockerd[6571]: time="2023-09-19T16:38:26.250429456Z" level=info msg="ignoring event" container=3c07a1e8b2b3af7b015f92e9ee1f7cd18a1cca3a74a8d87093464e960660972d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 16:38:26 functional-085000 dockerd[6577]: time="2023-09-19T16:38:26.250512663Z" level=info msg="shim disconnected" id=3c07a1e8b2b3af7b015f92e9ee1f7cd18a1cca3a74a8d87093464e960660972d namespace=moby
	Sep 19 16:38:26 functional-085000 dockerd[6577]: time="2023-09-19T16:38:26.250546954Z" level=warning msg="cleaning up after shim disconnected" id=3c07a1e8b2b3af7b015f92e9ee1f7cd18a1cca3a74a8d87093464e960660972d namespace=moby
	Sep 19 16:38:26 functional-085000 dockerd[6577]: time="2023-09-19T16:38:26.250550787Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 19 16:38:31 functional-085000 dockerd[6577]: time="2023-09-19T16:38:31.686748854Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 19 16:38:31 functional-085000 dockerd[6577]: time="2023-09-19T16:38:31.686875187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 19 16:38:31 functional-085000 dockerd[6577]: time="2023-09-19T16:38:31.686891103Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 19 16:38:31 functional-085000 dockerd[6577]: time="2023-09-19T16:38:31.686902228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 19 16:38:31 functional-085000 dockerd[6571]: time="2023-09-19T16:38:31.720924079Z" level=info msg="ignoring event" container=2d0fbb97184050d73a9484342d70ee77efdba010cc5b9c28d6c6622349859fc8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 16:38:31 functional-085000 dockerd[6577]: time="2023-09-19T16:38:31.720996287Z" level=info msg="shim disconnected" id=2d0fbb97184050d73a9484342d70ee77efdba010cc5b9c28d6c6622349859fc8 namespace=moby
	Sep 19 16:38:31 functional-085000 dockerd[6577]: time="2023-09-19T16:38:31.721020286Z" level=warning msg="cleaning up after shim disconnected" id=2d0fbb97184050d73a9484342d70ee77efdba010cc5b9c28d6c6622349859fc8 namespace=moby
	Sep 19 16:38:31 functional-085000 dockerd[6577]: time="2023-09-19T16:38:31.721024078Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	2d0fbb9718405       72565bf5bbedf                                                                                         11 seconds ago       Exited              echoserver-arm            3                   6b4ab65fe34cd       hello-node-759d89bdcc-tc97p
	a8d4539761623       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   18 seconds ago       Exited              mount-munger              0                   3c07a1e8b2b3a       busybox-mount
	58af91ee54b83       72565bf5bbedf                                                                                         24 seconds ago       Exited              echoserver-arm            2                   5d8c95b3655a3       hello-node-connect-7799dfb7c6-slmdx
	42e0c4ceca5cd       nginx@sha256:6926dd802f40e5e7257fded83e0d8030039642e4e10c4a98a6478e9c6fe06153                         27 seconds ago       Running             myfrontend                0                   50ed44b4af6b2       sp-pod
	a730e27b05577       nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70                         44 seconds ago       Running             nginx                     0                   1f456efbd89cb       nginx-svc
	b726153604fdf       97e04611ad434                                                                                         About a minute ago   Running             coredns                   2                   fb4f87fccbc5d       coredns-5dd5756b68-4blv9
	59bf943ac4611       ba04bb24b9575                                                                                         About a minute ago   Running             storage-provisioner       1                   a19a8f31dec23       storage-provisioner
	cffcc8812203a       7da62c127fc0f                                                                                         About a minute ago   Running             kube-proxy                2                   9aea59c01b74e       kube-proxy-4vj5h
	89be0de63af33       89d57b83c1786                                                                                         About a minute ago   Running             kube-controller-manager   2                   33d9e63caea3d       kube-controller-manager-functional-085000
	6a9b55ea8a8af       64fc40cee3716                                                                                         About a minute ago   Running             kube-scheduler            2                   2504fad32c51c       kube-scheduler-functional-085000
	6a35f4b09b309       30bb499447fe1                                                                                         About a minute ago   Running             kube-apiserver            0                   725d0bb0e5d35       kube-apiserver-functional-085000
	96d5b11c95a79       9cdd6470f48c8                                                                                         About a minute ago   Running             etcd                      2                   35b094abeb18f       etcd-functional-085000
	02e4027afc741       ba04bb24b9575                                                                                         2 minutes ago        Exited              storage-provisioner       0                   028e2f7a13aa8       storage-provisioner
	2574600891a5a       97e04611ad434                                                                                         2 minutes ago        Exited              coredns                   1                   078860dc9bd8f       coredns-5dd5756b68-4blv9
	eb937cac6350f       9cdd6470f48c8                                                                                         2 minutes ago        Exited              etcd                      1                   433ae42e2c5d7       etcd-functional-085000
	5b87c5488c959       7da62c127fc0f                                                                                         2 minutes ago        Exited              kube-proxy                1                   c1b411bc6d8d5       kube-proxy-4vj5h
	2a6734eff7be4       64fc40cee3716                                                                                         2 minutes ago        Exited              kube-scheduler            1                   25572ceeb2135       kube-scheduler-functional-085000
	66cf02a298128       89d57b83c1786                                                                                         2 minutes ago        Exited              kube-controller-manager   1                   b80dc7a17bc9c       kube-controller-manager-functional-085000
	
	* 
	* ==> coredns [2574600891a5] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:40989 - 50060 "HINFO IN 6559894277069968974.1267737148511562122. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.008799983s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [b726153604fd] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:44782 - 32827 "HINFO IN 8266865685051364732.6045979300107408312. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004150266s
	[INFO] 10.244.0.1:3312 - 55022 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000129906s
	[INFO] 10.244.0.1:51756 - 8302 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.00009745s
	[INFO] 10.244.0.1:11742 - 45959 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000032288s
	[INFO] 10.244.0.1:62726 - 11087 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.001046783s
	[INFO] 10.244.0.1:65114 - 63750 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.00005887s
	[INFO] 10.244.0.1:4955 - 8070 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000019874s
	
	* 
	* ==> describe nodes <==
	* Name:               functional-085000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-085000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4d69d3d50d3fb420e04057e6545e9fd90e260986
	                    minikube.k8s.io/name=functional-085000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_19T09_36_01_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 19 Sep 2023 16:35:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-085000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 19 Sep 2023 16:38:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 19 Sep 2023 16:38:19 +0000   Tue, 19 Sep 2023 16:35:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 19 Sep 2023 16:38:19 +0000   Tue, 19 Sep 2023 16:35:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 19 Sep 2023 16:38:19 +0000   Tue, 19 Sep 2023 16:35:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 19 Sep 2023 16:38:19 +0000   Tue, 19 Sep 2023 16:36:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-085000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 c57d989cb60e4f54aa82d9204745df97
	  System UUID:                c57d989cb60e4f54aa82d9204745df97
	  Boot ID:                    251c32e0-1d2f-4bbd-8ec0-b526f7342d41
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-759d89bdcc-tc97p                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  default                     hello-node-connect-7799dfb7c6-slmdx           0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 coredns-5dd5756b68-4blv9                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m28s
	  kube-system                 etcd-functional-085000                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m41s
	  kube-system                 kube-apiserver-functional-085000              250m (12%)    0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-controller-manager-functional-085000     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m41s
	  kube-system                 kube-proxy-4vj5h                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 kube-scheduler-functional-085000              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m41s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m5s
	  kubernetes-dashboard        dashboard-metrics-scraper-7fd5cb4ddc-9gswr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         0s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-tw58g         0 (0%)        0 (0%)      0 (0%)           0 (0%)         0s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m28s                  kube-proxy       
	  Normal  Starting                 84s                    kube-proxy       
	  Normal  Starting                 2m7s                   kube-proxy       
	  Normal  Starting                 2m46s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m46s (x8 over 2m46s)  kubelet          Node functional-085000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m46s (x8 over 2m46s)  kubelet          Node functional-085000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m46s (x7 over 2m46s)  kubelet          Node functional-085000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m46s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m41s                  kubelet          Node functional-085000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  2m41s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    2m41s                  kubelet          Node functional-085000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m41s                  kubelet          Node functional-085000 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m41s                  kubelet          Starting kubelet.
	  Normal  NodeReady                2m38s                  kubelet          Node functional-085000 status is now: NodeReady
	  Normal  RegisteredNode           2m29s                  node-controller  Node functional-085000 event: Registered Node functional-085000 in Controller
	  Normal  NodeNotReady             2m21s                  kubelet          Node functional-085000 status is now: NodeNotReady
	  Normal  RegisteredNode           115s                   node-controller  Node functional-085000 event: Registered Node functional-085000 in Controller
	  Normal  Starting                 88s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  88s (x8 over 88s)      kubelet          Node functional-085000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    88s (x8 over 88s)      kubelet          Node functional-085000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     88s (x7 over 88s)      kubelet          Node functional-085000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  88s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           73s                    node-controller  Node functional-085000 event: Registered Node functional-085000 in Controller
	
	* 
	* ==> dmesg <==
	* [  +2.756019] systemd-fstab-generator[3731]: Ignoring "noauto" for root device
	[  +0.151216] systemd-fstab-generator[3764]: Ignoring "noauto" for root device
	[  +0.077215] systemd-fstab-generator[3775]: Ignoring "noauto" for root device
	[  +0.086300] systemd-fstab-generator[3788]: Ignoring "noauto" for root device
	[  +4.990465] kauditd_printk_skb: 23 callbacks suppressed
	[  +6.297238] systemd-fstab-generator[4287]: Ignoring "noauto" for root device
	[  +0.066771] systemd-fstab-generator[4298]: Ignoring "noauto" for root device
	[  +0.087399] systemd-fstab-generator[4309]: Ignoring "noauto" for root device
	[  +0.070243] systemd-fstab-generator[4320]: Ignoring "noauto" for root device
	[  +0.088334] systemd-fstab-generator[4392]: Ignoring "noauto" for root device
	[  +6.456697] kauditd_printk_skb: 34 callbacks suppressed
	[Sep19 16:37] systemd-fstab-generator[6112]: Ignoring "noauto" for root device
	[  +0.131739] systemd-fstab-generator[6146]: Ignoring "noauto" for root device
	[  +0.083552] systemd-fstab-generator[6157]: Ignoring "noauto" for root device
	[  +0.093494] systemd-fstab-generator[6170]: Ignoring "noauto" for root device
	[ +11.418892] systemd-fstab-generator[6722]: Ignoring "noauto" for root device
	[  +0.065284] systemd-fstab-generator[6733]: Ignoring "noauto" for root device
	[  +0.081602] systemd-fstab-generator[6744]: Ignoring "noauto" for root device
	[  +0.065255] systemd-fstab-generator[6755]: Ignoring "noauto" for root device
	[  +0.086878] systemd-fstab-generator[6828]: Ignoring "noauto" for root device
	[  +0.991828] systemd-fstab-generator[7083]: Ignoring "noauto" for root device
	[  +3.807072] kauditd_printk_skb: 34 callbacks suppressed
	[ +26.154688] kauditd_printk_skb: 9 callbacks suppressed
	[  +1.007078] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[Sep19 16:38] kauditd_printk_skb: 6 callbacks suppressed
	
	* 
	* ==> etcd [96d5b11c95a7] <==
	* {"level":"info","ts":"2023-09-19T16:37:15.341806Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-19T16:37:15.341849Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-09-19T16:37:15.341973Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2023-09-19T16:37:15.341996Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2023-09-19T16:37:15.342029Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-19T16:37:15.34204Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-19T16:37:15.342544Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-09-19T16:37:15.342616Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-09-19T16:37:15.342627Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-09-19T16:37:15.342706Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-09-19T16:37:15.34271Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-09-19T16:37:16.639374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2023-09-19T16:37:16.639467Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-09-19T16:37:16.639544Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-09-19T16:37:16.639604Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2023-09-19T16:37:16.639633Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2023-09-19T16:37:16.639649Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2023-09-19T16:37:16.639661Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2023-09-19T16:37:16.641206Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-19T16:37:16.641474Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-19T16:37:16.642542Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-19T16:37:16.641208Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-085000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-19T16:37:16.642906Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2023-09-19T16:37:16.643153Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-19T16:37:16.643186Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> etcd [eb937cac6350] <==
	* {"level":"info","ts":"2023-09-19T16:36:34.037068Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-09-19T16:36:35.12222Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 2"}
	{"level":"info","ts":"2023-09-19T16:36:35.122328Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-09-19T16:36:35.12239Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2023-09-19T16:36:35.122424Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2023-09-19T16:36:35.122439Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-09-19T16:36:35.122472Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2023-09-19T16:36:35.122505Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-09-19T16:36:35.124822Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-19T16:36:35.125212Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-19T16:36:35.127725Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-19T16:36:35.128Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-19T16:36:35.128135Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-19T16:36:35.128525Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2023-09-19T16:36:35.12484Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-085000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-19T16:37:01.918117Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-09-19T16:37:01.918163Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"functional-085000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2023-09-19T16:37:01.918241Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-19T16:37:01.918286Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-19T16:37:01.924315Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-19T16:37:01.924337Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2023-09-19T16:37:01.924356Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2023-09-19T16:37:01.925713Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-09-19T16:37:01.925738Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-09-19T16:37:01.925742Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"functional-085000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	* 
	* ==> kernel <==
	*  16:38:42 up 3 min,  0 users,  load average: 0.75, 0.39, 0.16
	Linux functional-085000 5.10.57 #1 SMP PREEMPT Mon Sep 18 20:10:16 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [6a35f4b09b30] <==
	* I0919 16:37:17.297812       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	E0919 16:37:17.300032       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0919 16:37:17.303945       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0919 16:37:17.303960       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0919 16:37:17.303986       1 aggregator.go:166] initial CRD sync complete...
	I0919 16:37:17.304020       1 autoregister_controller.go:141] Starting autoregister controller
	I0919 16:37:17.304047       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0919 16:37:17.304066       1 cache.go:39] Caches are synced for autoregister controller
	I0919 16:37:17.307153       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0919 16:37:18.200081       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0919 16:37:18.824821       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0919 16:37:18.828019       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0919 16:37:18.838990       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0919 16:37:18.846801       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0919 16:37:18.848960       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0919 16:37:29.393543       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0919 16:37:29.445979       1 controller.go:624] quota admission added evaluator for: endpoints
	I0919 16:37:38.694725       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.110.94.252"}
	I0919 16:37:44.445306       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0919 16:37:44.500113       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.110.35.231"}
	I0919 16:37:54.892624       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.105.213.14"}
	I0919 16:38:05.325723       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.100.213.139"}
	I0919 16:38:42.551878       1 controller.go:624] quota admission added evaluator for: namespaces
	I0919 16:38:42.660556       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.250.161"}
	I0919 16:38:42.681367       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.71.216"}
	
	* 
	* ==> kube-controller-manager [66cf02a29812] <==
	* I0919 16:36:47.863589       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0919 16:36:47.863591       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0919 16:36:47.863618       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0919 16:36:47.864624       1 shared_informer.go:318] Caches are synced for crt configmap
	I0919 16:36:47.873933       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0919 16:36:47.873939       1 shared_informer.go:318] Caches are synced for PV protection
	I0919 16:36:47.875026       1 shared_informer.go:318] Caches are synced for cronjob
	I0919 16:36:47.876109       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0919 16:36:47.876120       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0919 16:36:47.877245       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0919 16:36:47.877316       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0919 16:36:47.878422       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0919 16:36:47.879579       1 shared_informer.go:318] Caches are synced for ephemeral
	I0919 16:36:47.880649       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0919 16:36:47.880695       1 shared_informer.go:318] Caches are synced for GC
	I0919 16:36:47.882672       1 shared_informer.go:318] Caches are synced for persistent volume
	I0919 16:36:47.935365       1 shared_informer.go:318] Caches are synced for daemon sets
	I0919 16:36:47.958411       1 shared_informer.go:318] Caches are synced for disruption
	I0919 16:36:47.981998       1 shared_informer.go:318] Caches are synced for stateful set
	I0919 16:36:48.034449       1 shared_informer.go:318] Caches are synced for resource quota
	I0919 16:36:48.044984       1 shared_informer.go:318] Caches are synced for HPA
	I0919 16:36:48.084504       1 shared_informer.go:318] Caches are synced for resource quota
	I0919 16:36:48.397836       1 shared_informer.go:318] Caches are synced for garbage collector
	I0919 16:36:48.466287       1 shared_informer.go:318] Caches are synced for garbage collector
	I0919 16:36:48.466300       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	* 
	* ==> kube-controller-manager [89be0de63af3] <==
	* E0919 16:38:42.591950       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" failed with pods "dashboard-metrics-scraper-7fd5cb4ddc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0919 16:38:42.592330       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8694d4445c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0919 16:38:42.595809       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-7fd5cb4ddc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0919 16:38:42.595822       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="3.78163ms"
	E0919 16:38:42.595828       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" failed with pods "dashboard-metrics-scraper-7fd5cb4ddc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0919 16:38:42.599413       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="13.218287ms"
	E0919 16:38:42.599425       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-8694d4445c" failed with pods "kubernetes-dashboard-8694d4445c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0919 16:38:42.600305       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="2.845201ms"
	E0919 16:38:42.600313       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" failed with pods "dashboard-metrics-scraper-7fd5cb4ddc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0919 16:38:42.600339       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-7fd5cb4ddc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0919 16:38:42.605297       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="5.84228ms"
	E0919 16:38:42.605350       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-8694d4445c" failed with pods "kubernetes-dashboard-8694d4445c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0919 16:38:42.605340       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8694d4445c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0919 16:38:42.607208       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="1.839689ms"
	E0919 16:38:42.607219       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-8694d4445c" failed with pods "kubernetes-dashboard-8694d4445c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0919 16:38:42.607232       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8694d4445c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0919 16:38:42.618397       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-tw58g"
	I0919 16:38:42.627178       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="11.160386ms"
	I0919 16:38:42.627471       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-7fd5cb4ddc-9gswr"
	I0919 16:38:42.630486       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="8.140641ms"
	I0919 16:38:42.642745       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="12.237066ms"
	I0919 16:38:42.642768       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="10µs"
	I0919 16:38:42.642841       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="15.640608ms"
	I0919 16:38:42.642868       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="17.292µs"
	I0919 16:38:42.650592       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="30.334µs"
	
	* 
	* ==> kube-proxy [5b87c5488c95] <==
	* I0919 16:36:34.639203       1 server_others.go:69] "Using iptables proxy"
	I0919 16:36:35.759027       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I0919 16:36:35.775860       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0919 16:36:35.775891       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0919 16:36:35.777478       1 server_others.go:152] "Using iptables Proxier"
	I0919 16:36:35.777520       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0919 16:36:35.777606       1 server.go:846] "Version info" version="v1.28.2"
	I0919 16:36:35.777677       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 16:36:35.778320       1 config.go:188] "Starting service config controller"
	I0919 16:36:35.778341       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0919 16:36:35.778355       1 config.go:97] "Starting endpoint slice config controller"
	I0919 16:36:35.778360       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0919 16:36:35.778570       1 config.go:315] "Starting node config controller"
	I0919 16:36:35.778581       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0919 16:36:35.880216       1 shared_informer.go:318] Caches are synced for node config
	I0919 16:36:35.880216       1 shared_informer.go:318] Caches are synced for service config
	I0919 16:36:35.880226       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-proxy [cffcc8812203] <==
	* I0919 16:37:18.178834       1 server_others.go:69] "Using iptables proxy"
	I0919 16:37:18.183799       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I0919 16:37:18.231024       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0919 16:37:18.232976       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0919 16:37:18.236520       1 server_others.go:152] "Using iptables Proxier"
	I0919 16:37:18.236565       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0919 16:37:18.236655       1 server.go:846] "Version info" version="v1.28.2"
	I0919 16:37:18.236724       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 16:37:18.237031       1 config.go:188] "Starting service config controller"
	I0919 16:37:18.237053       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0919 16:37:18.237082       1 config.go:97] "Starting endpoint slice config controller"
	I0919 16:37:18.237096       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0919 16:37:18.237327       1 config.go:315] "Starting node config controller"
	I0919 16:37:18.237352       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0919 16:37:18.337726       1 shared_informer.go:318] Caches are synced for service config
	I0919 16:37:18.337730       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0919 16:37:18.337734       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [2a6734eff7be] <==
	* I0919 16:36:34.505302       1 serving.go:348] Generated self-signed cert in-memory
	W0919 16:36:35.721838       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0919 16:36:35.721867       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0919 16:36:35.721872       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0919 16:36:35.721875       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0919 16:36:35.747326       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.2"
	I0919 16:36:35.747412       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 16:36:35.748451       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0919 16:36:35.748506       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 16:36:35.750883       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0919 16:36:35.748517       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0919 16:36:35.851499       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0919 16:37:01.904844       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0919 16:37:01.905075       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	E0919 16:37:01.905144       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [6a9b55ea8a8a] <==
	* I0919 16:37:15.839556       1 serving.go:348] Generated self-signed cert in-memory
	W0919 16:37:17.235076       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0919 16:37:17.235117       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0919 16:37:17.235132       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0919 16:37:17.235140       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0919 16:37:17.261913       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.2"
	I0919 16:37:17.261964       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 16:37:17.262925       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0919 16:37:17.262995       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 16:37:17.263030       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0919 16:37:17.263051       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0919 16:37:17.363374       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-09-19 16:35:43 UTC, ends at Tue 2023-09-19 16:38:43 UTC. --
	Sep 19 16:38:19 functional-085000 kubelet[7089]: E0919 16:38:19.167301    7089 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-7799dfb7c6-slmdx_default(79297217-3bab-4127-a07d-ffce4f0fab67)\"" pod="default/hello-node-connect-7799dfb7c6-slmdx" podUID="79297217-3bab-4127-a07d-ffce4f0fab67"
	Sep 19 16:38:22 functional-085000 kubelet[7089]: I0919 16:38:22.881646    7089 topology_manager.go:215] "Topology Admit Handler" podUID="4dae8ecf-0197-445d-acb1-603b28421fb4" podNamespace="default" podName="busybox-mount"
	Sep 19 16:38:23 functional-085000 kubelet[7089]: I0919 16:38:23.059635    7089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fdk6\" (UniqueName: \"kubernetes.io/projected/4dae8ecf-0197-445d-acb1-603b28421fb4-kube-api-access-7fdk6\") pod \"busybox-mount\" (UID: \"4dae8ecf-0197-445d-acb1-603b28421fb4\") " pod="default/busybox-mount"
	Sep 19 16:38:23 functional-085000 kubelet[7089]: I0919 16:38:23.059672    7089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/4dae8ecf-0197-445d-acb1-603b28421fb4-test-volume\") pod \"busybox-mount\" (UID: \"4dae8ecf-0197-445d-acb1-603b28421fb4\") " pod="default/busybox-mount"
	Sep 19 16:38:26 functional-085000 kubelet[7089]: I0919 16:38:26.381812    7089 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/4dae8ecf-0197-445d-acb1-603b28421fb4-test-volume\") pod \"4dae8ecf-0197-445d-acb1-603b28421fb4\" (UID: \"4dae8ecf-0197-445d-acb1-603b28421fb4\") "
	Sep 19 16:38:26 functional-085000 kubelet[7089]: I0919 16:38:26.381839    7089 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fdk6\" (UniqueName: \"kubernetes.io/projected/4dae8ecf-0197-445d-acb1-603b28421fb4-kube-api-access-7fdk6\") pod \"4dae8ecf-0197-445d-acb1-603b28421fb4\" (UID: \"4dae8ecf-0197-445d-acb1-603b28421fb4\") "
	Sep 19 16:38:26 functional-085000 kubelet[7089]: I0919 16:38:26.381999    7089 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4dae8ecf-0197-445d-acb1-603b28421fb4-test-volume" (OuterVolumeSpecName: "test-volume") pod "4dae8ecf-0197-445d-acb1-603b28421fb4" (UID: "4dae8ecf-0197-445d-acb1-603b28421fb4"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 19 16:38:26 functional-085000 kubelet[7089]: I0919 16:38:26.382810    7089 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dae8ecf-0197-445d-acb1-603b28421fb4-kube-api-access-7fdk6" (OuterVolumeSpecName: "kube-api-access-7fdk6") pod "4dae8ecf-0197-445d-acb1-603b28421fb4" (UID: "4dae8ecf-0197-445d-acb1-603b28421fb4"). InnerVolumeSpecName "kube-api-access-7fdk6". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 19 16:38:26 functional-085000 kubelet[7089]: I0919 16:38:26.481895    7089 reconciler_common.go:300] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/4dae8ecf-0197-445d-acb1-603b28421fb4-test-volume\") on node \"functional-085000\" DevicePath \"\""
	Sep 19 16:38:26 functional-085000 kubelet[7089]: I0919 16:38:26.481909    7089 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-7fdk6\" (UniqueName: \"kubernetes.io/projected/4dae8ecf-0197-445d-acb1-603b28421fb4-kube-api-access-7fdk6\") on node \"functional-085000\" DevicePath \"\""
	Sep 19 16:38:27 functional-085000 kubelet[7089]: I0919 16:38:27.211498    7089 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c07a1e8b2b3af7b015f92e9ee1f7cd18a1cca3a74a8d87093464e960660972d"
	Sep 19 16:38:31 functional-085000 kubelet[7089]: I0919 16:38:31.665577    7089 scope.go:117] "RemoveContainer" containerID="8e5ec3b719c09d2f19a80ab02f6e71dec9f0cc38d19341efd79943b0f9504196"
	Sep 19 16:38:32 functional-085000 kubelet[7089]: I0919 16:38:32.240664    7089 scope.go:117] "RemoveContainer" containerID="8e5ec3b719c09d2f19a80ab02f6e71dec9f0cc38d19341efd79943b0f9504196"
	Sep 19 16:38:32 functional-085000 kubelet[7089]: I0919 16:38:32.240809    7089 scope.go:117] "RemoveContainer" containerID="2d0fbb97184050d73a9484342d70ee77efdba010cc5b9c28d6c6622349859fc8"
	Sep 19 16:38:32 functional-085000 kubelet[7089]: E0919 16:38:32.240908    7089 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 40s restarting failed container=echoserver-arm pod=hello-node-759d89bdcc-tc97p_default(eea8b835-cda1-4aac-b659-533c9aed100d)\"" pod="default/hello-node-759d89bdcc-tc97p" podUID="eea8b835-cda1-4aac-b659-533c9aed100d"
	Sep 19 16:38:32 functional-085000 kubelet[7089]: I0919 16:38:32.664622    7089 scope.go:117] "RemoveContainer" containerID="58af91ee54b83ad2017f8c1a14f369afffcaecee974f2e653909e594b8bb1719"
	Sep 19 16:38:32 functional-085000 kubelet[7089]: E0919 16:38:32.664747    7089 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-7799dfb7c6-slmdx_default(79297217-3bab-4127-a07d-ffce4f0fab67)\"" pod="default/hello-node-connect-7799dfb7c6-slmdx" podUID="79297217-3bab-4127-a07d-ffce4f0fab67"
	Sep 19 16:38:42 functional-085000 kubelet[7089]: I0919 16:38:42.620937    7089 topology_manager.go:215] "Topology Admit Handler" podUID="23570a23-70b6-4c90-9df4-8766d0d9c0ce" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-tw58g"
	Sep 19 16:38:42 functional-085000 kubelet[7089]: E0919 16:38:42.620971    7089 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4dae8ecf-0197-445d-acb1-603b28421fb4" containerName="mount-munger"
	Sep 19 16:38:42 functional-085000 kubelet[7089]: I0919 16:38:42.620989    7089 memory_manager.go:346] "RemoveStaleState removing state" podUID="4dae8ecf-0197-445d-acb1-603b28421fb4" containerName="mount-munger"
	Sep 19 16:38:42 functional-085000 kubelet[7089]: I0919 16:38:42.633768    7089 topology_manager.go:215] "Topology Admit Handler" podUID="52c548f7-57d7-4eb5-bf38-cc1d6175ff85" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-7fd5cb4ddc-9gswr"
	Sep 19 16:38:42 functional-085000 kubelet[7089]: I0919 16:38:42.670922    7089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpklt\" (UniqueName: \"kubernetes.io/projected/52c548f7-57d7-4eb5-bf38-cc1d6175ff85-kube-api-access-rpklt\") pod \"dashboard-metrics-scraper-7fd5cb4ddc-9gswr\" (UID: \"52c548f7-57d7-4eb5-bf38-cc1d6175ff85\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc-9gswr"
	Sep 19 16:38:42 functional-085000 kubelet[7089]: I0919 16:38:42.670939    7089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/52c548f7-57d7-4eb5-bf38-cc1d6175ff85-tmp-volume\") pod \"dashboard-metrics-scraper-7fd5cb4ddc-9gswr\" (UID: \"52c548f7-57d7-4eb5-bf38-cc1d6175ff85\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc-9gswr"
	Sep 19 16:38:42 functional-085000 kubelet[7089]: I0919 16:38:42.670950    7089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/23570a23-70b6-4c90-9df4-8766d0d9c0ce-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-tw58g\" (UID: \"23570a23-70b6-4c90-9df4-8766d0d9c0ce\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-tw58g"
	Sep 19 16:38:42 functional-085000 kubelet[7089]: I0919 16:38:42.670960    7089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w45kv\" (UniqueName: \"kubernetes.io/projected/23570a23-70b6-4c90-9df4-8766d0d9c0ce-kube-api-access-w45kv\") pod \"kubernetes-dashboard-8694d4445c-tw58g\" (UID: \"23570a23-70b6-4c90-9df4-8766d0d9c0ce\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-tw58g"
	
	* 
	* ==> storage-provisioner [02e4027afc74] <==
	* I0919 16:36:38.157956       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0919 16:36:38.162367       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0919 16:36:38.162387       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0919 16:36:38.165086       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0919 16:36:38.165217       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3add266a-16c0-4fb1-b0aa-65db677b4734", APIVersion:"v1", ResourceVersion:"431", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-085000_04537cdc-8e8a-4a80-974c-927fa48eadfb became leader
	I0919 16:36:38.165232       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-085000_04537cdc-8e8a-4a80-974c-927fa48eadfb!
	I0919 16:36:38.267464       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-085000_04537cdc-8e8a-4a80-974c-927fa48eadfb!
	
	* 
	* ==> storage-provisioner [59bf943ac461] <==
	* I0919 16:37:18.260150       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0919 16:37:18.265031       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0919 16:37:18.265665       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0919 16:37:35.652651       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0919 16:37:35.652712       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-085000_8779fec9-7e95-4eda-a144-2d07df1bb920!
	I0919 16:37:35.652738       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3add266a-16c0-4fb1-b0aa-65db677b4734", APIVersion:"v1", ResourceVersion:"597", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-085000_8779fec9-7e95-4eda-a144-2d07df1bb920 became leader
	I0919 16:37:35.753246       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-085000_8779fec9-7e95-4eda-a144-2d07df1bb920!
	I0919 16:38:02.154637       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0919 16:38:02.154980       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"3b00b61d-47ad-42ee-b151-7e394e7e3835", APIVersion:"v1", ResourceVersion:"702", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0919 16:38:02.154690       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    513286d3-35a3-421b-859f-b864b6cf063f 387 0 2023-09-19 16:36:14 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2023-09-19 16:36:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-3b00b61d-47ad-42ee-b151-7e394e7e3835 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  3b00b61d-47ad-42ee-b151-7e394e7e3835 702 0 2023-09-19 16:38:02 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2023-09-19 16:38:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2023-09-19 16:38:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0919 16:38:02.155340       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-3b00b61d-47ad-42ee-b151-7e394e7e3835" provisioned
	I0919 16:38:02.155380       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0919 16:38:02.155386       1 volume_store.go:212] Trying to save persistentvolume "pvc-3b00b61d-47ad-42ee-b151-7e394e7e3835"
	I0919 16:38:02.160327       1 volume_store.go:219] persistentvolume "pvc-3b00b61d-47ad-42ee-b151-7e394e7e3835" saved
	I0919 16:38:02.160946       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"3b00b61d-47ad-42ee-b151-7e394e7e3835", APIVersion:"v1", ResourceVersion:"702", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-3b00b61d-47ad-42ee-b151-7e394e7e3835
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-085000 -n functional-085000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-085000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount dashboard-metrics-scraper-7fd5cb4ddc-9gswr kubernetes-dashboard-8694d4445c-tw58g
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-085000 describe pod busybox-mount dashboard-metrics-scraper-7fd5cb4ddc-9gswr kubernetes-dashboard-8694d4445c-tw58g
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-085000 describe pod busybox-mount dashboard-metrics-scraper-7fd5cb4ddc-9gswr kubernetes-dashboard-8694d4445c-tw58g: exit status 1 (41.395417ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-085000/192.168.105.4
	Start Time:       Tue, 19 Sep 2023 09:38:22 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  docker://a8d45397616233a630cd514dc322c9ef2a1f460bec3d291a0430f83252163f41
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Tue, 19 Sep 2023 09:38:24 -0700
	      Finished:     Tue, 19 Sep 2023 09:38:24 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7fdk6 (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-7fdk6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  20s   default-scheduler  Successfully assigned default/busybox-mount to functional-085000
	  Normal  Pulling    20s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     19s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.142s (1.142s including waiting)
	  Normal  Created    19s   kubelet            Created container mount-munger
	  Normal  Started    19s   kubelet            Started container mount-munger

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-7fd5cb4ddc-9gswr" not found
	Error from server (NotFound): pods "kubernetes-dashboard-8694d4445c-tw58g" not found

** /stderr **
helpers_test.go:279: kubectl --context functional-085000 describe pod busybox-mount dashboard-metrics-scraper-7fd5cb4ddc-9gswr kubernetes-dashboard-8694d4445c-tw58g: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (38.16s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.17s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-085000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-085000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 80. stderr: I0919 09:37:54.525755    2638 out.go:296] Setting OutFile to fd 1 ...
I0919 09:37:54.525967    2638 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0919 09:37:54.525970    2638 out.go:309] Setting ErrFile to fd 2...
I0919 09:37:54.525973    2638 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0919 09:37:54.526114    2638 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17240-943/.minikube/bin
I0919 09:37:54.526372    2638 mustload.go:65] Loading cluster: functional-085000
I0919 09:37:54.526568    2638 config.go:182] Loaded profile config "functional-085000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0919 09:37:54.531125    2638 out.go:177] 
W0919 09:37:54.534151    2638 out.go:239] X Exiting due to GUEST_STATUS: Unable to get machine status: state: connect: dial unix /Users/jenkins/minikube-integration/17240-943/.minikube/machines/functional-085000/monitor: connect: connection refused
X Exiting due to GUEST_STATUS: Unable to get machine status: state: connect: dial unix /Users/jenkins/minikube-integration/17240-943/.minikube/machines/functional-085000/monitor: connect: connection refused
W0919 09:37:54.534158    2638 out.go:239] * 
* 
W0919 09:37:54.535531    2638 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                           │
│    * If the above advice does not help, please let us know:                                                               │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
│                                                                                                                           │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
│    * Please also attach the following file to the GitHub issue:                                                           │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_tunnel_7075cb44437691034d825beac909ba5df9688569_0.log    │
│                                                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                           │
│    * If the above advice does not help, please let us know:                                                               │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
│                                                                                                                           │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
│    * Please also attach the following file to the GitHub issue:                                                           │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_tunnel_7075cb44437691034d825beac909ba5df9688569_0.log    │
│                                                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0919 09:37:54.538103    2638 out.go:177] 

stdout: 

functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-085000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2637: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-085000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-085000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-085000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-085000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-085000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.17s)

TestImageBuild/serial/BuildWithBuildArg (1.04s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-964000
image_test.go:105: failed to pass build-args with args: "out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-964000" : 
-- stdout --
	Sending build context to Docker daemon  2.048kB
	Step 1/5 : FROM gcr.io/google-containers/alpine-with-bash:1.0
	 ---> 822c13824dc2
	Step 2/5 : ARG ENV_A
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in c23168a9bde0
	Removing intermediate container c23168a9bde0
	 ---> 06a6540e2e56
	Step 3/5 : ARG ENV_B
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in 4c5d787079f8
	Removing intermediate container 4c5d787079f8
	 ---> 82297ddb63a4
	Step 4/5 : RUN echo "test-build-arg" $ENV_A $ENV_B
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in d4d3b1901b0e
	exec /bin/sh: exec format error
	

-- /stdout --
** stderr ** 
	DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
	            Install the buildx component to build images with BuildKit:
	            https://docs.docker.com/go/buildx/
	
	The command '/bin/sh -c echo "test-build-arg" $ENV_A $ENV_B' returned a non-zero code: 1

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-964000 -n image-964000
helpers_test.go:244: <<< TestImageBuild/serial/BuildWithBuildArg FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestImageBuild/serial/BuildWithBuildArg]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p image-964000 logs -n 25
helpers_test.go:252: TestImageBuild/serial/BuildWithBuildArg logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                   Args                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-085000 ssh findmnt            | functional-085000 | jenkins | v1.31.2 | 19 Sep 23 09:38 PDT |                     |
	|                | -T /mount2                               |                   |         |         |                     |                     |
	| ssh            | functional-085000 ssh findmnt            | functional-085000 | jenkins | v1.31.2 | 19 Sep 23 09:38 PDT | 19 Sep 23 09:38 PDT |
	|                | -T /mount1                               |                   |         |         |                     |                     |
	| ssh            | functional-085000 ssh findmnt            | functional-085000 | jenkins | v1.31.2 | 19 Sep 23 09:38 PDT |                     |
	|                | -T /mount2                               |                   |         |         |                     |                     |
	| ssh            | functional-085000 ssh findmnt            | functional-085000 | jenkins | v1.31.2 | 19 Sep 23 09:38 PDT | 19 Sep 23 09:38 PDT |
	|                | -T /mount1                               |                   |         |         |                     |                     |
	| ssh            | functional-085000 ssh findmnt            | functional-085000 | jenkins | v1.31.2 | 19 Sep 23 09:38 PDT |                     |
	|                | -T /mount2                               |                   |         |         |                     |                     |
	| ssh            | functional-085000 ssh findmnt            | functional-085000 | jenkins | v1.31.2 | 19 Sep 23 09:38 PDT | 19 Sep 23 09:38 PDT |
	|                | -T /mount1                               |                   |         |         |                     |                     |
	| ssh            | functional-085000 ssh findmnt            | functional-085000 | jenkins | v1.31.2 | 19 Sep 23 09:38 PDT |                     |
	|                | -T /mount2                               |                   |         |         |                     |                     |
	| start          | -p functional-085000                     | functional-085000 | jenkins | v1.31.2 | 19 Sep 23 09:38 PDT |                     |
	|                | --dry-run --memory                       |                   |         |         |                     |                     |
	|                | 250MB --alsologtostderr                  |                   |         |         |                     |                     |
	|                | --driver=qemu2                           |                   |         |         |                     |                     |
	| start          | -p functional-085000 --dry-run           | functional-085000 | jenkins | v1.31.2 | 19 Sep 23 09:38 PDT |                     |
	|                | --alsologtostderr -v=1                   |                   |         |         |                     |                     |
	|                | --driver=qemu2                           |                   |         |         |                     |                     |
	| start          | -p functional-085000                     | functional-085000 | jenkins | v1.31.2 | 19 Sep 23 09:38 PDT |                     |
	|                | --dry-run --memory                       |                   |         |         |                     |                     |
	|                | 250MB --alsologtostderr                  |                   |         |         |                     |                     |
	|                | --driver=qemu2                           |                   |         |         |                     |                     |
	| dashboard      | --url --port 36195                       | functional-085000 | jenkins | v1.31.2 | 19 Sep 23 09:38 PDT | 19 Sep 23 09:38 PDT |
	|                | -p functional-085000                     |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                   |                   |         |         |                     |                     |
	| update-context | functional-085000                        | functional-085000 | jenkins | v1.31.2 | 19 Sep 23 09:38 PDT | 19 Sep 23 09:38 PDT |
	|                | update-context                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                   |         |         |                     |                     |
	| update-context | functional-085000                        | functional-085000 | jenkins | v1.31.2 | 19 Sep 23 09:38 PDT | 19 Sep 23 09:38 PDT |
	|                | update-context                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                   |         |         |                     |                     |
	| update-context | functional-085000                        | functional-085000 | jenkins | v1.31.2 | 19 Sep 23 09:38 PDT | 19 Sep 23 09:38 PDT |
	|                | update-context                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                   |         |         |                     |                     |
	| image          | functional-085000                        | functional-085000 | jenkins | v1.31.2 | 19 Sep 23 09:38 PDT | 19 Sep 23 09:38 PDT |
	|                | image ls --format short                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                        |                   |         |         |                     |                     |
	| image          | functional-085000                        | functional-085000 | jenkins | v1.31.2 | 19 Sep 23 09:38 PDT | 19 Sep 23 09:38 PDT |
	|                | image ls --format yaml                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                        |                   |         |         |                     |                     |
	| ssh            | functional-085000 ssh pgrep              | functional-085000 | jenkins | v1.31.2 | 19 Sep 23 09:38 PDT |                     |
	|                | buildkitd                                |                   |         |         |                     |                     |
	| image          | functional-085000 image build -t         | functional-085000 | jenkins | v1.31.2 | 19 Sep 23 09:38 PDT | 19 Sep 23 09:38 PDT |
	|                | localhost/my-image:functional-085000     |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr         |                   |         |         |                     |                     |
	| image          | functional-085000 image ls               | functional-085000 | jenkins | v1.31.2 | 19 Sep 23 09:38 PDT | 19 Sep 23 09:38 PDT |
	| image          | functional-085000                        | functional-085000 | jenkins | v1.31.2 | 19 Sep 23 09:38 PDT | 19 Sep 23 09:38 PDT |
	|                | image ls --format json                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                        |                   |         |         |                     |                     |
	| image          | functional-085000                        | functional-085000 | jenkins | v1.31.2 | 19 Sep 23 09:38 PDT | 19 Sep 23 09:38 PDT |
	|                | image ls --format table                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                        |                   |         |         |                     |                     |
	| delete         | -p functional-085000                     | functional-085000 | jenkins | v1.31.2 | 19 Sep 23 09:38 PDT | 19 Sep 23 09:38 PDT |
	| start          | -p image-964000 --driver=qemu2           | image-964000      | jenkins | v1.31.2 | 19 Sep 23 09:38 PDT | 19 Sep 23 09:39 PDT |
	|                |                                          |                   |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-964000      | jenkins | v1.31.2 | 19 Sep 23 09:39 PDT | 19 Sep 23 09:39 PDT |
	|                | ./testdata/image-build/test-normal       |                   |         |         |                     |                     |
	|                | -p image-964000                          |                   |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-964000      | jenkins | v1.31.2 | 19 Sep 23 09:39 PDT | 19 Sep 23 09:39 PDT |
	|                | --build-opt=build-arg=ENV_A=test_env_str |                   |         |         |                     |                     |
	|                | --build-opt=no-cache                     |                   |         |         |                     |                     |
	|                | ./testdata/image-build/test-arg -p       |                   |         |         |                     |                     |
	|                | image-964000                             |                   |         |         |                     |                     |
	|----------------|------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/19 09:38:55
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.21.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 09:38:55.681891    2869 out.go:296] Setting OutFile to fd 1 ...
	I0919 09:38:55.682027    2869 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:38:55.682029    2869 out.go:309] Setting ErrFile to fd 2...
	I0919 09:38:55.682031    2869 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:38:55.682155    2869 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17240-943/.minikube/bin
	I0919 09:38:55.683190    2869 out.go:303] Setting JSON to false
	I0919 09:38:55.698593    2869 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":509,"bootTime":1695141026,"procs":402,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0919 09:38:55.698686    2869 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0919 09:38:55.702581    2869 out.go:177] * [image-964000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0919 09:38:55.709586    2869 out.go:177]   - MINIKUBE_LOCATION=17240
	I0919 09:38:55.709620    2869 notify.go:220] Checking for updates...
	I0919 09:38:55.710927    2869 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	I0919 09:38:55.713528    2869 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 09:38:55.716500    2869 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 09:38:55.719494    2869 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	I0919 09:38:55.722456    2869 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 09:38:55.725740    2869 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 09:38:55.729510    2869 out.go:177] * Using the qemu2 driver based on user configuration
	I0919 09:38:55.736459    2869 start.go:298] selected driver: qemu2
	I0919 09:38:55.736463    2869 start.go:902] validating driver "qemu2" against <nil>
	I0919 09:38:55.736468    2869 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 09:38:55.736528    2869 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0919 09:38:55.739522    2869 out.go:177] * Automatically selected the socket_vmnet network
	I0919 09:38:55.744479    2869 start_flags.go:384] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0919 09:38:55.744581    2869 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0919 09:38:55.744597    2869 cni.go:84] Creating CNI manager for ""
	I0919 09:38:55.744603    2869 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 09:38:55.744606    2869 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0919 09:38:55.744611    2869 start_flags.go:321] config:
	{Name:image-964000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:image-964000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 09:38:55.748808    2869 iso.go:125] acquiring lock: {Name:mka8e023e06bcb4803b8b8b48f1b2d6ef6b15681 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 09:38:55.755337    2869 out.go:177] * Starting control plane node image-964000 in cluster image-964000
	I0919 09:38:55.758510    2869 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0919 09:38:55.758532    2869 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0919 09:38:55.758537    2869 cache.go:57] Caching tarball of preloaded images
	I0919 09:38:55.758589    2869 preload.go:174] Found /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0919 09:38:55.758593    2869 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0919 09:38:55.758786    2869 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/image-964000/config.json ...
	I0919 09:38:55.758797    2869 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/image-964000/config.json: {Name:mk275d7afaa70c187ffa68badebed8b42014b564 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 09:38:55.759020    2869 start.go:365] acquiring machines lock for image-964000: {Name:mk731f293d5b39390bcf4e15f4078ebe7c03576e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 09:38:55.759055    2869 start.go:369] acquired machines lock for "image-964000" in 30.708µs
	I0919 09:38:55.759067    2869 start.go:93] Provisioning new machine with config: &{Name:image-964000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.2 ClusterName:image-964000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 09:38:55.759091    2869 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 09:38:55.766512    2869 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0919 09:38:55.788910    2869 start.go:159] libmachine.API.Create for "image-964000" (driver="qemu2")
	I0919 09:38:55.788931    2869 client.go:168] LocalClient.Create starting
	I0919 09:38:55.788995    2869 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca.pem
	I0919 09:38:55.789017    2869 main.go:141] libmachine: Decoding PEM data...
	I0919 09:38:55.789032    2869 main.go:141] libmachine: Parsing certificate...
	I0919 09:38:55.789064    2869 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/cert.pem
	I0919 09:38:55.789080    2869 main.go:141] libmachine: Decoding PEM data...
	I0919 09:38:55.789087    2869 main.go:141] libmachine: Parsing certificate...
	I0919 09:38:55.789385    2869 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17240-943/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17240-943/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0919 09:38:55.906825    2869 main.go:141] libmachine: Creating SSH key...
	I0919 09:38:55.967829    2869 main.go:141] libmachine: Creating Disk image...
	I0919 09:38:55.967832    2869 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 09:38:55.967967    2869 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17240-943/.minikube/machines/image-964000/disk.qcow2.raw /Users/jenkins/minikube-integration/17240-943/.minikube/machines/image-964000/disk.qcow2
	I0919 09:38:55.985198    2869 main.go:141] libmachine: STDOUT: 
	I0919 09:38:55.985210    2869 main.go:141] libmachine: STDERR: 
	I0919 09:38:55.985257    2869 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/image-964000/disk.qcow2 +20000M
	I0919 09:38:55.992355    2869 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 09:38:55.992365    2869 main.go:141] libmachine: STDERR: 
	I0919 09:38:55.992391    2869 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17240-943/.minikube/machines/image-964000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17240-943/.minikube/machines/image-964000/disk.qcow2
	I0919 09:38:55.992396    2869 main.go:141] libmachine: Starting QEMU VM...
	I0919 09:38:55.992434    2869 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17240-943/.minikube/machines/image-964000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/image-964000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17240-943/.minikube/machines/image-964000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:fa:3c:88:3c:79 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/image-964000/disk.qcow2
	I0919 09:38:56.035587    2869 main.go:141] libmachine: STDOUT: 
	I0919 09:38:56.035619    2869 main.go:141] libmachine: STDERR: 
	I0919 09:38:56.035622    2869 main.go:141] libmachine: Attempt 0
	I0919 09:38:56.035634    2869 main.go:141] libmachine: Searching for ee:fa:3c:88:3c:79 in /var/db/dhcpd_leases ...
	I0919 09:38:56.035700    2869 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0919 09:38:56.035718    2869 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:2e:f:83:d4:c1:c8 ID:1,2e:f:83:d4:c1:c8 Lease:0x650b1f5f}
	I0919 09:38:56.035726    2869 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:e2:56:fc:9d:f3:2c ID:1,e2:56:fc:9d:f3:2c Lease:0x6509cdd3}
	I0919 09:38:56.035730    2869 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:4e:fc:6b:9f:ce:3d ID:1,4e:fc:6b:9f:ce:3d Lease:0x650b1f11}
	I0919 09:38:58.037910    2869 main.go:141] libmachine: Attempt 1
	I0919 09:38:58.037954    2869 main.go:141] libmachine: Searching for ee:fa:3c:88:3c:79 in /var/db/dhcpd_leases ...
	I0919 09:38:58.038279    2869 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0919 09:38:58.038324    2869 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:2e:f:83:d4:c1:c8 ID:1,2e:f:83:d4:c1:c8 Lease:0x650b1f5f}
	I0919 09:38:58.038350    2869 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:e2:56:fc:9d:f3:2c ID:1,e2:56:fc:9d:f3:2c Lease:0x6509cdd3}
	I0919 09:38:58.038379    2869 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:4e:fc:6b:9f:ce:3d ID:1,4e:fc:6b:9f:ce:3d Lease:0x650b1f11}
	I0919 09:39:00.040524    2869 main.go:141] libmachine: Attempt 2
	I0919 09:39:00.040539    2869 main.go:141] libmachine: Searching for ee:fa:3c:88:3c:79 in /var/db/dhcpd_leases ...
	I0919 09:39:00.040649    2869 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0919 09:39:00.040660    2869 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:2e:f:83:d4:c1:c8 ID:1,2e:f:83:d4:c1:c8 Lease:0x650b1f5f}
	I0919 09:39:00.040673    2869 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:e2:56:fc:9d:f3:2c ID:1,e2:56:fc:9d:f3:2c Lease:0x6509cdd3}
	I0919 09:39:00.040678    2869 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:4e:fc:6b:9f:ce:3d ID:1,4e:fc:6b:9f:ce:3d Lease:0x650b1f11}
	I0919 09:39:02.042777    2869 main.go:141] libmachine: Attempt 3
	I0919 09:39:02.042798    2869 main.go:141] libmachine: Searching for ee:fa:3c:88:3c:79 in /var/db/dhcpd_leases ...
	I0919 09:39:02.042891    2869 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0919 09:39:02.042902    2869 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:2e:f:83:d4:c1:c8 ID:1,2e:f:83:d4:c1:c8 Lease:0x650b1f5f}
	I0919 09:39:02.042922    2869 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:e2:56:fc:9d:f3:2c ID:1,e2:56:fc:9d:f3:2c Lease:0x6509cdd3}
	I0919 09:39:02.042926    2869 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:4e:fc:6b:9f:ce:3d ID:1,4e:fc:6b:9f:ce:3d Lease:0x650b1f11}
	I0919 09:39:04.044941    2869 main.go:141] libmachine: Attempt 4
	I0919 09:39:04.044944    2869 main.go:141] libmachine: Searching for ee:fa:3c:88:3c:79 in /var/db/dhcpd_leases ...
	I0919 09:39:04.044975    2869 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0919 09:39:04.044979    2869 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:2e:f:83:d4:c1:c8 ID:1,2e:f:83:d4:c1:c8 Lease:0x650b1f5f}
	I0919 09:39:04.044984    2869 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:e2:56:fc:9d:f3:2c ID:1,e2:56:fc:9d:f3:2c Lease:0x6509cdd3}
	I0919 09:39:04.044988    2869 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:4e:fc:6b:9f:ce:3d ID:1,4e:fc:6b:9f:ce:3d Lease:0x650b1f11}
	I0919 09:39:06.046543    2869 main.go:141] libmachine: Attempt 5
	I0919 09:39:06.046552    2869 main.go:141] libmachine: Searching for ee:fa:3c:88:3c:79 in /var/db/dhcpd_leases ...
	I0919 09:39:06.046636    2869 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0919 09:39:06.046645    2869 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:2e:f:83:d4:c1:c8 ID:1,2e:f:83:d4:c1:c8 Lease:0x650b1f5f}
	I0919 09:39:06.046649    2869 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:e2:56:fc:9d:f3:2c ID:1,e2:56:fc:9d:f3:2c Lease:0x6509cdd3}
	I0919 09:39:06.046653    2869 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:4e:fc:6b:9f:ce:3d ID:1,4e:fc:6b:9f:ce:3d Lease:0x650b1f11}
	I0919 09:39:08.048773    2869 main.go:141] libmachine: Attempt 6
	I0919 09:39:08.048812    2869 main.go:141] libmachine: Searching for ee:fa:3c:88:3c:79 in /var/db/dhcpd_leases ...
	I0919 09:39:08.049158    2869 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0919 09:39:08.049195    2869 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:ee:fa:3c:88:3c:79 ID:1,ee:fa:3c:88:3c:79 Lease:0x650b202a}
	I0919 09:39:08.049207    2869 main.go:141] libmachine: Found match: ee:fa:3c:88:3c:79
	I0919 09:39:08.049241    2869 main.go:141] libmachine: IP: 192.168.105.5
	I0919 09:39:08.049256    2869 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.5)...
	I0919 09:39:10.070902    2869 machine.go:88] provisioning docker machine ...
	I0919 09:39:10.070959    2869 buildroot.go:166] provisioning hostname "image-964000"
	I0919 09:39:10.071121    2869 main.go:141] libmachine: Using SSH client type: native
	I0919 09:39:10.071972    2869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100b54760] 0x100b56ed0 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0919 09:39:10.071989    2869 main.go:141] libmachine: About to run SSH command:
	sudo hostname image-964000 && echo "image-964000" | sudo tee /etc/hostname
	I0919 09:39:10.168701    2869 main.go:141] libmachine: SSH cmd err, output: <nil>: image-964000
	
	I0919 09:39:10.168826    2869 main.go:141] libmachine: Using SSH client type: native
	I0919 09:39:10.169330    2869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100b54760] 0x100b56ed0 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0919 09:39:10.169342    2869 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\simage-964000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 image-964000/g' /etc/hosts;
				else 
					echo '127.0.1.1 image-964000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 09:39:10.246417    2869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 09:39:10.246433    2869 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17240-943/.minikube CaCertPath:/Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17240-943/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17240-943/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17240-943/.minikube}
	I0919 09:39:10.246448    2869 buildroot.go:174] setting up certificates
	I0919 09:39:10.246458    2869 provision.go:83] configureAuth start
	I0919 09:39:10.246464    2869 provision.go:138] copyHostCerts
	I0919 09:39:10.246585    2869 exec_runner.go:144] found /Users/jenkins/minikube-integration/17240-943/.minikube/ca.pem, removing ...
	I0919 09:39:10.246592    2869 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17240-943/.minikube/ca.pem
	I0919 09:39:10.246781    2869 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17240-943/.minikube/ca.pem (1082 bytes)
	I0919 09:39:10.247020    2869 exec_runner.go:144] found /Users/jenkins/minikube-integration/17240-943/.minikube/cert.pem, removing ...
	I0919 09:39:10.247023    2869 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17240-943/.minikube/cert.pem
	I0919 09:39:10.247079    2869 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17240-943/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17240-943/.minikube/cert.pem (1123 bytes)
	I0919 09:39:10.247214    2869 exec_runner.go:144] found /Users/jenkins/minikube-integration/17240-943/.minikube/key.pem, removing ...
	I0919 09:39:10.247216    2869 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17240-943/.minikube/key.pem
	I0919 09:39:10.247277    2869 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17240-943/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17240-943/.minikube/key.pem (1679 bytes)
	I0919 09:39:10.247383    2869 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17240-943/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca-key.pem org=jenkins.image-964000 san=[192.168.105.5 192.168.105.5 localhost 127.0.0.1 minikube image-964000]
	I0919 09:39:10.427498    2869 provision.go:172] copyRemoteCerts
	I0919 09:39:10.427546    2869 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 09:39:10.427559    2869 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/image-964000/id_rsa Username:docker}
	I0919 09:39:10.463592    2869 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17240-943/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 09:39:10.471178    2869 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 09:39:10.478341    2869 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17240-943/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0919 09:39:10.485081    2869 provision.go:86] duration metric: configureAuth took 238.620584ms
	I0919 09:39:10.485086    2869 buildroot.go:189] setting minikube options for container-runtime
	I0919 09:39:10.485177    2869 config.go:182] Loaded profile config "image-964000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 09:39:10.485209    2869 main.go:141] libmachine: Using SSH client type: native
	I0919 09:39:10.485421    2869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100b54760] 0x100b56ed0 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0919 09:39:10.485424    2869 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 09:39:10.549881    2869 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0919 09:39:10.549890    2869 buildroot.go:70] root file system type: tmpfs
	I0919 09:39:10.549947    2869 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 09:39:10.549997    2869 main.go:141] libmachine: Using SSH client type: native
	I0919 09:39:10.550247    2869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100b54760] 0x100b56ed0 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0919 09:39:10.550281    2869 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 09:39:10.618126    2869 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 09:39:10.618179    2869 main.go:141] libmachine: Using SSH client type: native
	I0919 09:39:10.618433    2869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100b54760] 0x100b56ed0 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0919 09:39:10.618441    2869 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 09:39:10.956622    2869 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0919 09:39:10.956631    2869 machine.go:91] provisioned docker machine in 885.727834ms
	I0919 09:39:10.956636    2869 client.go:171] LocalClient.Create took 15.167966416s
	I0919 09:39:10.956642    2869 start.go:167] duration metric: libmachine.API.Create for "image-964000" took 15.168002333s
	I0919 09:39:10.956645    2869 start.go:300] post-start starting for "image-964000" (driver="qemu2")
	I0919 09:39:10.956649    2869 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 09:39:10.956728    2869 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 09:39:10.956735    2869 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/image-964000/id_rsa Username:docker}
	I0919 09:39:10.991003    2869 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 09:39:10.992642    2869 info.go:137] Remote host: Buildroot 2021.02.12
	I0919 09:39:10.992651    2869 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17240-943/.minikube/addons for local assets ...
	I0919 09:39:10.992727    2869 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17240-943/.minikube/files for local assets ...
	I0919 09:39:10.992828    2869 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17240-943/.minikube/files/etc/ssl/certs/20512.pem -> 20512.pem in /etc/ssl/certs
	I0919 09:39:10.992939    2869 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 09:39:10.995810    2869 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17240-943/.minikube/files/etc/ssl/certs/20512.pem --> /etc/ssl/certs/20512.pem (1708 bytes)
	I0919 09:39:11.006379    2869 start.go:303] post-start completed in 49.726042ms
	I0919 09:39:11.006801    2869 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/image-964000/config.json ...
	I0919 09:39:11.006956    2869 start.go:128] duration metric: createHost completed in 15.248127583s
	I0919 09:39:11.006990    2869 main.go:141] libmachine: Using SSH client type: native
	I0919 09:39:11.007199    2869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100b54760] 0x100b56ed0 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0919 09:39:11.007202    2869 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0919 09:39:11.070559    2869 main.go:141] libmachine: SSH cmd err, output: <nil>: 1695141550.607799919
	
	I0919 09:39:11.070564    2869 fix.go:206] guest clock: 1695141550.607799919
	I0919 09:39:11.070567    2869 fix.go:219] Guest: 2023-09-19 09:39:10.607799919 -0700 PDT Remote: 2023-09-19 09:39:11.006957 -0700 PDT m=+15.344362460 (delta=-399.157081ms)
	I0919 09:39:11.070577    2869 fix.go:190] guest clock delta is within tolerance: -399.157081ms
	I0919 09:39:11.070579    2869 start.go:83] releasing machines lock for "image-964000", held for 15.311787625s
	I0919 09:39:11.070841    2869 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 09:39:11.070841    2869 ssh_runner.go:195] Run: cat /version.json
	I0919 09:39:11.070853    2869 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/image-964000/id_rsa Username:docker}
	I0919 09:39:11.070860    2869 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/image-964000/id_rsa Username:docker}
	I0919 09:39:11.145958    2869 ssh_runner.go:195] Run: systemctl --version
	I0919 09:39:11.147912    2869 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0919 09:39:11.149734    2869 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0919 09:39:11.149763    2869 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 09:39:11.154635    2869 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0919 09:39:11.154640    2869 start.go:469] detecting cgroup driver to use...
	I0919 09:39:11.154716    2869 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 09:39:11.160434    2869 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0919 09:39:11.163952    2869 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 09:39:11.167428    2869 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0919 09:39:11.167461    2869 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0919 09:39:11.171009    2869 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 09:39:11.174294    2869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 09:39:11.177119    2869 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 09:39:11.180238    2869 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 09:39:11.183739    2869 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 09:39:11.187271    2869 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 09:39:11.190252    2869 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 09:39:11.192909    2869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 09:39:11.274741    2869 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 09:39:11.282732    2869 start.go:469] detecting cgroup driver to use...
	I0919 09:39:11.282791    2869 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 09:39:11.290282    2869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 09:39:11.295261    2869 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 09:39:11.301038    2869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 09:39:11.305551    2869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 09:39:11.310076    2869 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0919 09:39:11.347870    2869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 09:39:11.353180    2869 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 09:39:11.358445    2869 ssh_runner.go:195] Run: which cri-dockerd
	I0919 09:39:11.359702    2869 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 09:39:11.362685    2869 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0919 09:39:11.367717    2869 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 09:39:11.443623    2869 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 09:39:11.519541    2869 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0919 09:39:11.519551    2869 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0919 09:39:11.524896    2869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 09:39:11.600029    2869 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 09:39:12.754448    2869 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.154423834s)
	I0919 09:39:12.754502    2869 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 09:39:12.834725    2869 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 09:39:12.913079    2869 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 09:39:12.995371    2869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 09:39:13.070798    2869 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 09:39:13.077551    2869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 09:39:13.171723    2869 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0919 09:39:13.196150    2869 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 09:39:13.196237    2869 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 09:39:13.198481    2869 start.go:537] Will wait 60s for crictl version
	I0919 09:39:13.198525    2869 ssh_runner.go:195] Run: which crictl
	I0919 09:39:13.200380    2869 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 09:39:13.223977    2869 start.go:553] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I0919 09:39:13.224048    2869 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 09:39:13.233865    2869 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 09:39:13.249239    2869 out.go:204] * Preparing Kubernetes v1.28.2 on Docker 24.0.6 ...
	I0919 09:39:13.249376    2869 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0919 09:39:13.250754    2869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 09:39:13.254427    2869 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0919 09:39:13.254470    2869 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 09:39:13.259715    2869 docker.go:636] Got preloaded images: 
	I0919 09:39:13.259720    2869 docker.go:642] registry.k8s.io/kube-apiserver:v1.28.2 wasn't preloaded
	I0919 09:39:13.259753    2869 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0919 09:39:13.262898    2869 ssh_runner.go:195] Run: which lz4
	I0919 09:39:13.264182    2869 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0919 09:39:13.265511    2869 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0919 09:39:13.265520    2869 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (356993689 bytes)
	I0919 09:39:14.585557    2869 docker.go:600] Took 1.321432 seconds to copy over tarball
	I0919 09:39:14.585614    2869 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0919 09:39:15.618927    2869 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.033308084s)
	I0919 09:39:15.618939    2869 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0919 09:39:15.634900    2869 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0919 09:39:15.638007    2869 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0919 09:39:15.642855    2869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 09:39:15.718722    2869 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 09:39:17.179727    2869 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.461017834s)
	I0919 09:39:17.179819    2869 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 09:39:17.185514    2869 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0919 09:39:17.185526    2869 cache_images.go:84] Images are preloaded, skipping loading
	I0919 09:39:17.185581    2869 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0919 09:39:17.192766    2869 cni.go:84] Creating CNI manager for ""
	I0919 09:39:17.192773    2869 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 09:39:17.192785    2869 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0919 09:39:17.192795    2869 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.5 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:image-964000 NodeName:image-964000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 09:39:17.192859    2869 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "image-964000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 09:39:17.192898    2869 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=image-964000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:image-964000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0919 09:39:17.192946    2869 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I0919 09:39:17.196263    2869 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 09:39:17.196290    2869 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 09:39:17.199633    2869 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I0919 09:39:17.204916    2869 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 09:39:17.209989    2869 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2098 bytes)
	I0919 09:39:17.215156    2869 ssh_runner.go:195] Run: grep 192.168.105.5	control-plane.minikube.internal$ /etc/hosts
	I0919 09:39:17.216538    2869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.5	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 09:39:17.220325    2869 certs.go:56] Setting up /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/image-964000 for IP: 192.168.105.5
	I0919 09:39:17.220336    2869 certs.go:190] acquiring lock for shared ca certs: {Name:mk8e0a0ed9a6157106206482b1c6d1a127cc10e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 09:39:17.220474    2869 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17240-943/.minikube/ca.key
	I0919 09:39:17.220509    2869 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17240-943/.minikube/proxy-client-ca.key
	I0919 09:39:17.220537    2869 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/image-964000/client.key
	I0919 09:39:17.220543    2869 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/image-964000/client.crt with IP's: []
	I0919 09:39:17.318485    2869 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/image-964000/client.crt ...
	I0919 09:39:17.318489    2869 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/image-964000/client.crt: {Name:mk91a967f92e10044d9d757fbe23f31b7fe661a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 09:39:17.318704    2869 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/image-964000/client.key ...
	I0919 09:39:17.318706    2869 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/image-964000/client.key: {Name:mk4e370aa3bb268884118b652e9e29f6dd7f9a3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 09:39:17.318818    2869 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/image-964000/apiserver.key.e69b33ca
	I0919 09:39:17.318824    2869 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/image-964000/apiserver.crt.e69b33ca with IP's: [192.168.105.5 10.96.0.1 127.0.0.1 10.0.0.1]
	I0919 09:39:17.418369    2869 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/image-964000/apiserver.crt.e69b33ca ...
	I0919 09:39:17.418371    2869 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/image-964000/apiserver.crt.e69b33ca: {Name:mk3cb593b5a4d447c8cd10549a49437928851a90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 09:39:17.418488    2869 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/image-964000/apiserver.key.e69b33ca ...
	I0919 09:39:17.418490    2869 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/image-964000/apiserver.key.e69b33ca: {Name:mk4d70002770f375d9c308deafcdb3911054959c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 09:39:17.418586    2869 certs.go:337] copying /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/image-964000/apiserver.crt.e69b33ca -> /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/image-964000/apiserver.crt
	I0919 09:39:17.418824    2869 certs.go:341] copying /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/image-964000/apiserver.key.e69b33ca -> /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/image-964000/apiserver.key
	I0919 09:39:17.418960    2869 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/image-964000/proxy-client.key
	I0919 09:39:17.418970    2869 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/image-964000/proxy-client.crt with IP's: []
	I0919 09:39:17.542820    2869 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/image-964000/proxy-client.crt ...
	I0919 09:39:17.542823    2869 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/image-964000/proxy-client.crt: {Name:mkad3a50a9bbbc311cac3966823f8640b3da1cb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 09:39:17.542989    2869 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/image-964000/proxy-client.key ...
	I0919 09:39:17.542991    2869 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/image-964000/proxy-client.key: {Name:mkdec9748934a33ad3a34353abc4bc408fa831ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 09:39:17.543257    2869 certs.go:437] found cert: /Users/jenkins/minikube-integration/17240-943/.minikube/certs/Users/jenkins/minikube-integration/17240-943/.minikube/certs/2051.pem (1338 bytes)
	W0919 09:39:17.543289    2869 certs.go:433] ignoring /Users/jenkins/minikube-integration/17240-943/.minikube/certs/Users/jenkins/minikube-integration/17240-943/.minikube/certs/2051_empty.pem, impossibly tiny 0 bytes
	I0919 09:39:17.543295    2869 certs.go:437] found cert: /Users/jenkins/minikube-integration/17240-943/.minikube/certs/Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 09:39:17.543318    2869 certs.go:437] found cert: /Users/jenkins/minikube-integration/17240-943/.minikube/certs/Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca.pem (1082 bytes)
	I0919 09:39:17.543338    2869 certs.go:437] found cert: /Users/jenkins/minikube-integration/17240-943/.minikube/certs/Users/jenkins/minikube-integration/17240-943/.minikube/certs/cert.pem (1123 bytes)
	I0919 09:39:17.543357    2869 certs.go:437] found cert: /Users/jenkins/minikube-integration/17240-943/.minikube/certs/Users/jenkins/minikube-integration/17240-943/.minikube/certs/key.pem (1679 bytes)
	I0919 09:39:17.543401    2869 certs.go:437] found cert: /Users/jenkins/minikube-integration/17240-943/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17240-943/.minikube/files/etc/ssl/certs/20512.pem (1708 bytes)
	I0919 09:39:17.543756    2869 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/image-964000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0919 09:39:17.551732    2869 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/image-964000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0919 09:39:17.559151    2869 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/image-964000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 09:39:17.565938    2869 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/image-964000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 09:39:17.572601    2869 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17240-943/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 09:39:17.579768    2869 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17240-943/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0919 09:39:17.586918    2869 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17240-943/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 09:39:17.594810    2869 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17240-943/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 09:39:17.601664    2869 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17240-943/.minikube/files/etc/ssl/certs/20512.pem --> /usr/share/ca-certificates/20512.pem (1708 bytes)
	I0919 09:39:17.608334    2869 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17240-943/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 09:39:17.615603    2869 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17240-943/.minikube/certs/2051.pem --> /usr/share/ca-certificates/2051.pem (1338 bytes)
	I0919 09:39:17.622688    2869 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 09:39:17.627818    2869 ssh_runner.go:195] Run: openssl version
	I0919 09:39:17.629873    2869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20512.pem && ln -fs /usr/share/ca-certificates/20512.pem /etc/ssl/certs/20512.pem"
	I0919 09:39:17.632858    2869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20512.pem
	I0919 09:39:17.634453    2869 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 19 16:35 /usr/share/ca-certificates/20512.pem
	I0919 09:39:17.634478    2869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20512.pem
	I0919 09:39:17.636253    2869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20512.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 09:39:17.639732    2869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 09:39:17.643224    2869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 09:39:17.644985    2869 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 19 16:34 /usr/share/ca-certificates/minikubeCA.pem
	I0919 09:39:17.645004    2869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 09:39:17.646821    2869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 09:39:17.649710    2869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2051.pem && ln -fs /usr/share/ca-certificates/2051.pem /etc/ssl/certs/2051.pem"
	I0919 09:39:17.652636    2869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2051.pem
	I0919 09:39:17.654122    2869 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 19 16:35 /usr/share/ca-certificates/2051.pem
	I0919 09:39:17.654152    2869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2051.pem
	I0919 09:39:17.656013    2869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2051.pem /etc/ssl/certs/51391683.0"
	I0919 09:39:17.659293    2869 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0919 09:39:17.660705    2869 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0919 09:39:17.660735    2869 kubeadm.go:404] StartCluster: {Name:image-964000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:image-964000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 09:39:17.660793    2869 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0919 09:39:17.666353    2869 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 09:39:17.669210    2869 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 09:39:17.672160    2869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 09:39:17.675156    2869 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 09:39:17.675169    2869 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0919 09:39:17.697576    2869 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I0919 09:39:17.697608    2869 kubeadm.go:322] [preflight] Running pre-flight checks
	I0919 09:39:17.760468    2869 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 09:39:17.760518    2869 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 09:39:17.760572    2869 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0919 09:39:17.856434    2869 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 09:39:17.866596    2869 out.go:204]   - Generating certificates and keys ...
	I0919 09:39:17.866628    2869 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0919 09:39:17.866656    2869 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0919 09:39:18.072306    2869 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 09:39:18.163849    2869 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0919 09:39:18.472438    2869 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0919 09:39:18.548213    2869 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0919 09:39:18.619731    2869 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0919 09:39:18.619790    2869 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [image-964000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I0919 09:39:18.659502    2869 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0919 09:39:18.659571    2869 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [image-964000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I0919 09:39:18.739463    2869 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 09:39:18.997472    2869 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 09:39:19.095580    2869 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0919 09:39:19.095608    2869 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 09:39:19.238334    2869 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 09:39:19.377941    2869 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 09:39:19.425891    2869 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 09:39:19.565350    2869 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 09:39:19.565578    2869 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 09:39:19.567452    2869 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 09:39:19.572747    2869 out.go:204]   - Booting up control plane ...
	I0919 09:39:19.572827    2869 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 09:39:19.572930    2869 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 09:39:19.572960    2869 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 09:39:19.574850    2869 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 09:39:19.574891    2869 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 09:39:19.574906    2869 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0919 09:39:19.664094    2869 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0919 09:39:23.665708    2869 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.001827 seconds
	I0919 09:39:23.665767    2869 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 09:39:23.672181    2869 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 09:39:24.182716    2869 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 09:39:24.182809    2869 kubeadm.go:322] [mark-control-plane] Marking the node image-964000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 09:39:24.688295    2869 kubeadm.go:322] [bootstrap-token] Using token: 19dc51.pxj6duyjpe27wlwt
	I0919 09:39:24.694551    2869 out.go:204]   - Configuring RBAC rules ...
	I0919 09:39:24.694619    2869 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 09:39:24.696250    2869 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 09:39:24.703926    2869 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 09:39:24.705230    2869 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 09:39:24.706414    2869 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 09:39:24.707573    2869 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 09:39:24.711620    2869 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 09:39:24.897180    2869 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0919 09:39:25.098838    2869 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0919 09:39:25.099160    2869 kubeadm.go:322] 
	I0919 09:39:25.099188    2869 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0919 09:39:25.099190    2869 kubeadm.go:322] 
	I0919 09:39:25.099241    2869 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0919 09:39:25.099244    2869 kubeadm.go:322] 
	I0919 09:39:25.099259    2869 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0919 09:39:25.099290    2869 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 09:39:25.099314    2869 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 09:39:25.099317    2869 kubeadm.go:322] 
	I0919 09:39:25.099345    2869 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0919 09:39:25.099347    2869 kubeadm.go:322] 
	I0919 09:39:25.099374    2869 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 09:39:25.099376    2869 kubeadm.go:322] 
	I0919 09:39:25.099401    2869 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0919 09:39:25.099443    2869 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 09:39:25.099479    2869 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 09:39:25.099481    2869 kubeadm.go:322] 
	I0919 09:39:25.099525    2869 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 09:39:25.099564    2869 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0919 09:39:25.099566    2869 kubeadm.go:322] 
	I0919 09:39:25.099608    2869 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 19dc51.pxj6duyjpe27wlwt \
	I0919 09:39:25.099663    2869 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ca3cab74a9dc47dde0bf47a79e9f850e6b13ad8707fb3a16c62adcc7135054bc \
	I0919 09:39:25.099674    2869 kubeadm.go:322] 	--control-plane 
	I0919 09:39:25.099676    2869 kubeadm.go:322] 
	I0919 09:39:25.099722    2869 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0919 09:39:25.099724    2869 kubeadm.go:322] 
	I0919 09:39:25.099783    2869 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 19dc51.pxj6duyjpe27wlwt \
	I0919 09:39:25.099829    2869 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ca3cab74a9dc47dde0bf47a79e9f850e6b13ad8707fb3a16c62adcc7135054bc 
	I0919 09:39:25.099890    2869 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0919 09:39:25.099897    2869 cni.go:84] Creating CNI manager for ""
	I0919 09:39:25.099904    2869 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 09:39:25.108365    2869 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0919 09:39:25.111449    2869 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0919 09:39:25.114611    2869 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0919 09:39:25.119306    2869 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 09:39:25.119364    2869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:39:25.119378    2869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=4d69d3d50d3fb420e04057e6545e9fd90e260986 minikube.k8s.io/name=image-964000 minikube.k8s.io/updated_at=2023_09_19T09_39_25_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:39:25.122537    2869 ops.go:34] apiserver oom_adj: -16
	I0919 09:39:25.187017    2869 kubeadm.go:1081] duration metric: took 67.685167ms to wait for elevateKubeSystemPrivileges.
	I0919 09:39:25.187028    2869 kubeadm.go:406] StartCluster complete in 7.526426167s
	I0919 09:39:25.187036    2869 settings.go:142] acquiring lock: {Name:mk7316c4de97357fafef76bf7f58c3638d00d866 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 09:39:25.187121    2869 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17240-943/kubeconfig
	I0919 09:39:25.187439    2869 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17240-943/kubeconfig: {Name:mk0534d05ae1a49ed75724777911378ef3989658 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 09:39:25.187658    2869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 09:39:25.187683    2869 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0919 09:39:25.187751    2869 config.go:182] Loaded profile config "image-964000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 09:39:25.187751    2869 addons.go:69] Setting storage-provisioner=true in profile "image-964000"
	I0919 09:39:25.187754    2869 addons.go:69] Setting default-storageclass=true in profile "image-964000"
	I0919 09:39:25.187757    2869 addons.go:231] Setting addon storage-provisioner=true in "image-964000"
	I0919 09:39:25.187760    2869 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "image-964000"
	I0919 09:39:25.187776    2869 host.go:66] Checking if "image-964000" exists ...
	I0919 09:39:25.192505    2869 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 09:39:25.196276    2869 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 09:39:25.196279    2869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 09:39:25.196286    2869 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/image-964000/id_rsa Username:docker}
	I0919 09:39:25.200651    2869 addons.go:231] Setting addon default-storageclass=true in "image-964000"
	I0919 09:39:25.200667    2869 host.go:66] Checking if "image-964000" exists ...
	I0919 09:39:25.201310    2869 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 09:39:25.201313    2869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 09:39:25.201319    2869 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/image-964000/id_rsa Username:docker}
	I0919 09:39:25.204103    2869 kapi.go:248] "coredns" deployment in "kube-system" namespace and "image-964000" context rescaled to 1 replicas
	I0919 09:39:25.204115    2869 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 09:39:25.211250    2869 out.go:177] * Verifying Kubernetes components...
	I0919 09:39:25.214428    2869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 09:39:25.233500    2869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 09:39:25.233817    2869 api_server.go:52] waiting for apiserver process to appear ...
	I0919 09:39:25.233848    2869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 09:39:25.243349    2869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 09:39:25.273809    2869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 09:39:25.652821    2869 start.go:917] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0919 09:39:25.652837    2869 api_server.go:72] duration metric: took 448.718792ms to wait for apiserver process to appear ...
	I0919 09:39:25.652843    2869 api_server.go:88] waiting for apiserver healthz status ...
	I0919 09:39:25.652850    2869 api_server.go:253] Checking apiserver healthz at https://192.168.105.5:8443/healthz ...
	I0919 09:39:25.656444    2869 api_server.go:279] https://192.168.105.5:8443/healthz returned 200:
	ok
	I0919 09:39:25.657056    2869 api_server.go:141] control plane version: v1.28.2
	I0919 09:39:25.657060    2869 api_server.go:131] duration metric: took 4.215667ms to wait for apiserver health ...
	I0919 09:39:25.657063    2869 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 09:39:25.659900    2869 system_pods.go:59] 4 kube-system pods found
	I0919 09:39:25.659906    2869 system_pods.go:61] "etcd-image-964000" [4ecb7cf6-b6d6-4bd6-a5b5-bb4604b1942f] Pending
	I0919 09:39:25.659909    2869 system_pods.go:61] "kube-apiserver-image-964000" [c91ebdb7-6303-4df4-b264-1ba9a7db3c5d] Pending
	I0919 09:39:25.659911    2869 system_pods.go:61] "kube-controller-manager-image-964000" [b71fbc6a-93b8-4baa-bf1b-eca38e6a1eab] Pending
	I0919 09:39:25.659913    2869 system_pods.go:61] "kube-scheduler-image-964000" [07d997cb-5e9c-4b29-8d22-54686bb18887] Pending
	I0919 09:39:25.659915    2869 system_pods.go:74] duration metric: took 2.850833ms to wait for pod list to return data ...
	I0919 09:39:25.659918    2869 kubeadm.go:581] duration metric: took 455.802584ms to wait for : map[apiserver:true system_pods:true] ...
	I0919 09:39:25.659923    2869 node_conditions.go:102] verifying NodePressure condition ...
	I0919 09:39:25.661438    2869 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0919 09:39:25.661444    2869 node_conditions.go:123] node cpu capacity is 2
	I0919 09:39:25.661449    2869 node_conditions.go:105] duration metric: took 1.524459ms to run NodePressure ...
	I0919 09:39:25.661453    2869 start.go:228] waiting for startup goroutines ...
	I0919 09:39:25.711867    2869 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0919 09:39:25.718820    2869 addons.go:502] enable addons completed in 531.155125ms: enabled=[storage-provisioner default-storageclass]
	I0919 09:39:25.718831    2869 start.go:233] waiting for cluster config update ...
	I0919 09:39:25.718835    2869 start.go:242] writing updated cluster config ...
	I0919 09:39:25.719062    2869 ssh_runner.go:195] Run: rm -f paused
	I0919 09:39:25.748597    2869 start.go:600] kubectl: 1.27.2, cluster: 1.28.2 (minor skew: 1)
	I0919 09:39:25.751722    2869 out.go:177] * Done! kubectl is now configured to use "image-964000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Tue 2023-09-19 16:39:06 UTC, ends at Tue 2023-09-19 16:39:27 UTC. --
	Sep 19 16:39:20 image-964000 dockerd[1177]: time="2023-09-19T16:39:20.196300257Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 19 16:39:20 image-964000 dockerd[1177]: time="2023-09-19T16:39:20.196345840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 19 16:39:20 image-964000 dockerd[1177]: time="2023-09-19T16:39:20.196357132Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 19 16:39:20 image-964000 dockerd[1177]: time="2023-09-19T16:39:20.196364048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 19 16:39:20 image-964000 cri-dockerd[1064]: time="2023-09-19T16:39:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9bcb0efc65ba584b9e09f4208d4187af6f38260af89104321206c63ea71cc425/resolv.conf as [nameserver 192.168.105.1]"
	Sep 19 16:39:20 image-964000 cri-dockerd[1064]: time="2023-09-19T16:39:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a203a5de89b9149295c5f11bbff15fc6c9745ab70fe7727220be8e3ddb54e67d/resolv.conf as [nameserver 192.168.105.1]"
	Sep 19 16:39:20 image-964000 dockerd[1177]: time="2023-09-19T16:39:20.252469715Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 19 16:39:20 image-964000 dockerd[1177]: time="2023-09-19T16:39:20.252540798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 19 16:39:20 image-964000 dockerd[1177]: time="2023-09-19T16:39:20.252552882Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 19 16:39:20 image-964000 dockerd[1177]: time="2023-09-19T16:39:20.252561965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 19 16:39:20 image-964000 dockerd[1177]: time="2023-09-19T16:39:20.270812048Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 19 16:39:20 image-964000 dockerd[1177]: time="2023-09-19T16:39:20.270992548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 19 16:39:20 image-964000 dockerd[1177]: time="2023-09-19T16:39:20.271025465Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 19 16:39:20 image-964000 dockerd[1177]: time="2023-09-19T16:39:20.271074215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 19 16:39:26 image-964000 dockerd[1171]: time="2023-09-19T16:39:26.343333468Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Sep 19 16:39:26 image-964000 dockerd[1171]: time="2023-09-19T16:39:26.462631968Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Sep 19 16:39:26 image-964000 dockerd[1171]: time="2023-09-19T16:39:26.479589885Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Sep 19 16:39:26 image-964000 dockerd[1177]: time="2023-09-19T16:39:26.514689343Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 19 16:39:26 image-964000 dockerd[1177]: time="2023-09-19T16:39:26.514885551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 19 16:39:26 image-964000 dockerd[1177]: time="2023-09-19T16:39:26.515070676Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 19 16:39:26 image-964000 dockerd[1177]: time="2023-09-19T16:39:26.515075843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 19 16:39:27 image-964000 dockerd[1171]: time="2023-09-19T16:39:27.209704119Z" level=info msg="ignoring event" container=d4d3b1901b0e347cac6d97997ee6dcd4dfd6ee24fc6bafb991217d9ec5c24385 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 16:39:27 image-964000 dockerd[1177]: time="2023-09-19T16:39:27.210080827Z" level=info msg="shim disconnected" id=d4d3b1901b0e347cac6d97997ee6dcd4dfd6ee24fc6bafb991217d9ec5c24385 namespace=moby
	Sep 19 16:39:27 image-964000 dockerd[1177]: time="2023-09-19T16:39:27.210110869Z" level=warning msg="cleaning up after shim disconnected" id=d4d3b1901b0e347cac6d97997ee6dcd4dfd6ee24fc6bafb991217d9ec5c24385 namespace=moby
	Sep 19 16:39:27 image-964000 dockerd[1177]: time="2023-09-19T16:39:27.210115244Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8f4f52b36c3d9       9cdd6470f48c8       7 seconds ago       Running             etcd                      0                   a203a5de89b91       etcd-image-964000
	e0350e461935d       64fc40cee3716       7 seconds ago       Running             kube-scheduler            0                   9bcb0efc65ba5       kube-scheduler-image-964000
	ecf9672460f46       89d57b83c1786       7 seconds ago       Running             kube-controller-manager   0                   c4c0d0f4427f0       kube-controller-manager-image-964000
	b76687872ed24       30bb499447fe1       7 seconds ago       Running             kube-apiserver            0                   695e8aa4ef240       kube-apiserver-image-964000
	
	* 
	* ==> describe nodes <==
	* Name:               image-964000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=image-964000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4d69d3d50d3fb420e04057e6545e9fd90e260986
	                    minikube.k8s.io/name=image-964000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_19T09_39_25_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 19 Sep 2023 16:39:21 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  image-964000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 19 Sep 2023 16:39:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 19 Sep 2023 16:39:24 +0000   Tue, 19 Sep 2023 16:39:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 19 Sep 2023 16:39:24 +0000   Tue, 19 Sep 2023 16:39:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 19 Sep 2023 16:39:24 +0000   Tue, 19 Sep 2023 16:39:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 19 Sep 2023 16:39:24 +0000   Tue, 19 Sep 2023 16:39:21 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
	Addresses:
	  InternalIP:  192.168.105.5
	  Hostname:    image-964000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 88c353c4969d4fdf99d48c0e0de21b18
	  System UUID:                88c353c4969d4fdf99d48c0e0de21b18
	  Boot ID:                    8a3d2f22-a566-4223-9a79-7719f809bea6
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-image-964000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         3s
	  kube-system                 kube-apiserver-image-964000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-controller-manager-image-964000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-image-964000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (2%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From     Message
	  ----    ------                   ----  ----     -------
	  Normal  Starting                 3s    kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  3s    kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3s    kubelet  Node image-964000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3s    kubelet  Node image-964000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3s    kubelet  Node image-964000 status is now: NodeHasSufficientPID
	
	* 
	* ==> dmesg <==
	* [Sep19 16:39] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.666305] EINJ: EINJ table not found.
	[  +0.538331] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.042787] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000852] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +4.213001] systemd-fstab-generator[483]: Ignoring "noauto" for root device
	[  +0.068067] systemd-fstab-generator[494]: Ignoring "noauto" for root device
	[  +0.440401] systemd-fstab-generator[760]: Ignoring "noauto" for root device
	[  +0.171848] systemd-fstab-generator[799]: Ignoring "noauto" for root device
	[  +0.073544] systemd-fstab-generator[810]: Ignoring "noauto" for root device
	[  +0.081639] systemd-fstab-generator[823]: Ignoring "noauto" for root device
	[  +1.232075] systemd-fstab-generator[981]: Ignoring "noauto" for root device
	[  +0.079152] systemd-fstab-generator[992]: Ignoring "noauto" for root device
	[  +0.084176] systemd-fstab-generator[1003]: Ignoring "noauto" for root device
	[  +0.076484] systemd-fstab-generator[1014]: Ignoring "noauto" for root device
	[  +0.098441] systemd-fstab-generator[1057]: Ignoring "noauto" for root device
	[  +2.547067] systemd-fstab-generator[1164]: Ignoring "noauto" for root device
	[  +1.443008] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.498126] systemd-fstab-generator[1544]: Ignoring "noauto" for root device
	[  +5.134391] systemd-fstab-generator[2450]: Ignoring "noauto" for root device
	[  +2.201247] kauditd_printk_skb: 41 callbacks suppressed
	
	* 
	* ==> etcd [8f4f52b36c3d] <==
	* {"level":"info","ts":"2023-09-19T16:39:20.414154Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 switched to configuration voters=(6403572207504089856)"}
	{"level":"info","ts":"2023-09-19T16:39:20.414384Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cd5c0afff2184bea","local-member-id":"58de0efec1d86300","added-peer-id":"58de0efec1d86300","added-peer-peer-urls":["https://192.168.105.5:2380"]}
	{"level":"info","ts":"2023-09-19T16:39:20.416654Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-09-19T16:39:20.416749Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.105.5:2380"}
	{"level":"info","ts":"2023-09-19T16:39:20.417428Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.5:2380"}
	{"level":"info","ts":"2023-09-19T16:39:20.417886Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"58de0efec1d86300","initial-advertise-peer-urls":["https://192.168.105.5:2380"],"listen-peer-urls":["https://192.168.105.5:2380"],"advertise-client-urls":["https://192.168.105.5:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.5:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-09-19T16:39:20.417942Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-09-19T16:39:21.011189Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 is starting a new election at term 1"}
	{"level":"info","ts":"2023-09-19T16:39:21.011268Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-09-19T16:39:21.011291Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 received MsgPreVoteResp from 58de0efec1d86300 at term 1"}
	{"level":"info","ts":"2023-09-19T16:39:21.011325Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became candidate at term 2"}
	{"level":"info","ts":"2023-09-19T16:39:21.011342Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 received MsgVoteResp from 58de0efec1d86300 at term 2"}
	{"level":"info","ts":"2023-09-19T16:39:21.011364Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became leader at term 2"}
	{"level":"info","ts":"2023-09-19T16:39:21.011398Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 58de0efec1d86300 elected leader 58de0efec1d86300 at term 2"}
	{"level":"info","ts":"2023-09-19T16:39:21.012096Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"58de0efec1d86300","local-member-attributes":"{Name:image-964000 ClientURLs:[https://192.168.105.5:2379]}","request-path":"/0/members/58de0efec1d86300/attributes","cluster-id":"cd5c0afff2184bea","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-19T16:39:21.012135Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-19T16:39:21.012579Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-19T16:39:21.012639Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-19T16:39:21.012734Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-19T16:39:21.013103Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.5:2379"}
	{"level":"info","ts":"2023-09-19T16:39:21.014554Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cd5c0afff2184bea","local-member-id":"58de0efec1d86300","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-19T16:39:21.014771Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-19T16:39:21.017294Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-19T16:39:21.021812Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-19T16:39:21.021844Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  16:39:27 up 0 min,  0 users,  load average: 0.62, 0.13, 0.04
	Linux image-964000 5.10.57 #1 SMP PREEMPT Mon Sep 18 20:10:16 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [b76687872ed2] <==
	* I0919 16:39:21.645028       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0919 16:39:21.646220       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0919 16:39:21.647921       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0919 16:39:21.646925       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0919 16:39:21.647196       1 controller.go:624] quota admission added evaluator for: namespaces
	I0919 16:39:21.662528       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0919 16:39:21.662785       1 aggregator.go:166] initial CRD sync complete...
	I0919 16:39:21.662828       1 autoregister_controller.go:141] Starting autoregister controller
	I0919 16:39:21.662850       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0919 16:39:21.662858       1 cache.go:39] Caches are synced for autoregister controller
	I0919 16:39:21.674050       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0919 16:39:21.680849       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0919 16:39:22.547945       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0919 16:39:22.549413       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0919 16:39:22.549421       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0919 16:39:22.701114       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0919 16:39:22.712240       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0919 16:39:22.752501       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0919 16:39:22.754404       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.105.5]
	I0919 16:39:22.754777       1 controller.go:624] quota admission added evaluator for: endpoints
	I0919 16:39:22.756006       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0919 16:39:23.584490       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0919 16:39:24.430127       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0919 16:39:24.434311       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0919 16:39:24.437810       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	
	* 
	* ==> kube-controller-manager [ecf9672460f4] <==
	* I0919 16:39:23.585855       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0919 16:39:23.585861       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0919 16:39:23.585864       1 shared_informer.go:318] Caches are synced for token_cleaner
	I0919 16:39:23.589878       1 controllermanager.go:642] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0919 16:39:23.589950       1 publisher.go:102] "Starting root CA cert publisher controller"
	I0919 16:39:23.589957       1 shared_informer.go:311] Waiting for caches to sync for crt configmap
	I0919 16:39:23.592802       1 controllermanager.go:642] "Started controller" controller="endpoints-controller"
	I0919 16:39:23.592889       1 endpoints_controller.go:174] "Starting endpoint controller"
	I0919 16:39:23.592895       1 shared_informer.go:311] Waiting for caches to sync for endpoint
	I0919 16:39:23.595878       1 controllermanager.go:642] "Started controller" controller="serviceaccount-controller"
	I0919 16:39:23.595997       1 serviceaccounts_controller.go:111] "Starting service account controller"
	I0919 16:39:23.596004       1 shared_informer.go:311] Waiting for caches to sync for service account
	I0919 16:39:23.599606       1 controllermanager.go:642] "Started controller" controller="disruption-controller"
	I0919 16:39:23.599669       1 disruption.go:437] "Sending events to api server."
	I0919 16:39:23.599686       1 disruption.go:448] "Starting disruption controller"
	I0919 16:39:23.599689       1 shared_informer.go:311] Waiting for caches to sync for disruption
	I0919 16:39:23.602079       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0919 16:39:23.602149       1 cleaner.go:83] "Starting CSR cleaner controller"
	I0919 16:39:23.604321       1 controllermanager.go:642] "Started controller" controller="ttl-controller"
	I0919 16:39:23.604365       1 ttl_controller.go:124] "Starting TTL controller"
	I0919 16:39:23.604369       1 shared_informer.go:311] Waiting for caches to sync for TTL
	I0919 16:39:23.606613       1 controllermanager.go:642] "Started controller" controller="endpointslice-mirroring-controller"
	I0919 16:39:23.606727       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller"
	I0919 16:39:23.606733       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring
	I0919 16:39:23.674142       1 shared_informer.go:318] Caches are synced for tokens
	
	* 
	* ==> kube-scheduler [e0350e461935] <==
	* W0919 16:39:21.626648       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0919 16:39:21.626655       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0919 16:39:21.626683       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0919 16:39:21.626691       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0919 16:39:21.626739       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0919 16:39:21.626742       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0919 16:39:21.626769       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0919 16:39:21.626772       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0919 16:39:21.626792       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0919 16:39:21.626800       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0919 16:39:21.626894       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0919 16:39:21.626935       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0919 16:39:22.440645       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0919 16:39:22.440665       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0919 16:39:22.506232       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0919 16:39:22.506248       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0919 16:39:22.528959       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0919 16:39:22.528980       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0919 16:39:22.541650       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0919 16:39:22.541663       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0919 16:39:22.543169       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0919 16:39:22.543178       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0919 16:39:22.572010       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0919 16:39:22.572037       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0919 16:39:22.923092       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-09-19 16:39:06 UTC, ends at Tue 2023-09-19 16:39:27 UTC. --
	Sep 19 16:39:24 image-964000 kubelet[2470]: I0919 16:39:24.584896    2470 kubelet_node_status.go:70] "Attempting to register node" node="image-964000"
	Sep 19 16:39:24 image-964000 kubelet[2470]: I0919 16:39:24.588929    2470 kubelet_node_status.go:108] "Node was previously registered" node="image-964000"
	Sep 19 16:39:24 image-964000 kubelet[2470]: I0919 16:39:24.588965    2470 kubelet_node_status.go:73] "Successfully registered node" node="image-964000"
	Sep 19 16:39:24 image-964000 kubelet[2470]: I0919 16:39:24.591792    2470 topology_manager.go:215] "Topology Admit Handler" podUID="c4bc670169b22f0de0472cf0269037aa" podNamespace="kube-system" podName="etcd-image-964000"
	Sep 19 16:39:24 image-964000 kubelet[2470]: I0919 16:39:24.592936    2470 topology_manager.go:215] "Topology Admit Handler" podUID="7f851975d4823b3916adfd77da574061" podNamespace="kube-system" podName="kube-apiserver-image-964000"
	Sep 19 16:39:24 image-964000 kubelet[2470]: I0919 16:39:24.593004    2470 topology_manager.go:215] "Topology Admit Handler" podUID="58c8f5d2dfe3be38915ab0ea236a8946" podNamespace="kube-system" podName="kube-controller-manager-image-964000"
	Sep 19 16:39:24 image-964000 kubelet[2470]: I0919 16:39:24.593035    2470 topology_manager.go:215] "Topology Admit Handler" podUID="317eb6d9d2c813174edfc0666b4b8811" podNamespace="kube-system" podName="kube-scheduler-image-964000"
	Sep 19 16:39:24 image-964000 kubelet[2470]: I0919 16:39:24.785953    2470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/317eb6d9d2c813174edfc0666b4b8811-kubeconfig\") pod \"kube-scheduler-image-964000\" (UID: \"317eb6d9d2c813174edfc0666b4b8811\") " pod="kube-system/kube-scheduler-image-964000"
	Sep 19 16:39:24 image-964000 kubelet[2470]: I0919 16:39:24.785987    2470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/c4bc670169b22f0de0472cf0269037aa-etcd-certs\") pod \"etcd-image-964000\" (UID: \"c4bc670169b22f0de0472cf0269037aa\") " pod="kube-system/etcd-image-964000"
	Sep 19 16:39:24 image-964000 kubelet[2470]: I0919 16:39:24.785999    2470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/c4bc670169b22f0de0472cf0269037aa-etcd-data\") pod \"etcd-image-964000\" (UID: \"c4bc670169b22f0de0472cf0269037aa\") " pod="kube-system/etcd-image-964000"
	Sep 19 16:39:24 image-964000 kubelet[2470]: I0919 16:39:24.786009    2470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7f851975d4823b3916adfd77da574061-k8s-certs\") pod \"kube-apiserver-image-964000\" (UID: \"7f851975d4823b3916adfd77da574061\") " pod="kube-system/kube-apiserver-image-964000"
	Sep 19 16:39:24 image-964000 kubelet[2470]: I0919 16:39:24.786020    2470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7f851975d4823b3916adfd77da574061-usr-share-ca-certificates\") pod \"kube-apiserver-image-964000\" (UID: \"7f851975d4823b3916adfd77da574061\") " pod="kube-system/kube-apiserver-image-964000"
	Sep 19 16:39:24 image-964000 kubelet[2470]: I0919 16:39:24.786030    2470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/58c8f5d2dfe3be38915ab0ea236a8946-kubeconfig\") pod \"kube-controller-manager-image-964000\" (UID: \"58c8f5d2dfe3be38915ab0ea236a8946\") " pod="kube-system/kube-controller-manager-image-964000"
	Sep 19 16:39:24 image-964000 kubelet[2470]: I0919 16:39:24.786039    2470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7f851975d4823b3916adfd77da574061-ca-certs\") pod \"kube-apiserver-image-964000\" (UID: \"7f851975d4823b3916adfd77da574061\") " pod="kube-system/kube-apiserver-image-964000"
	Sep 19 16:39:24 image-964000 kubelet[2470]: I0919 16:39:24.786058    2470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/58c8f5d2dfe3be38915ab0ea236a8946-ca-certs\") pod \"kube-controller-manager-image-964000\" (UID: \"58c8f5d2dfe3be38915ab0ea236a8946\") " pod="kube-system/kube-controller-manager-image-964000"
	Sep 19 16:39:24 image-964000 kubelet[2470]: I0919 16:39:24.786068    2470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/58c8f5d2dfe3be38915ab0ea236a8946-flexvolume-dir\") pod \"kube-controller-manager-image-964000\" (UID: \"58c8f5d2dfe3be38915ab0ea236a8946\") " pod="kube-system/kube-controller-manager-image-964000"
	Sep 19 16:39:24 image-964000 kubelet[2470]: I0919 16:39:24.786076    2470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/58c8f5d2dfe3be38915ab0ea236a8946-k8s-certs\") pod \"kube-controller-manager-image-964000\" (UID: \"58c8f5d2dfe3be38915ab0ea236a8946\") " pod="kube-system/kube-controller-manager-image-964000"
	Sep 19 16:39:24 image-964000 kubelet[2470]: I0919 16:39:24.786089    2470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/58c8f5d2dfe3be38915ab0ea236a8946-usr-share-ca-certificates\") pod \"kube-controller-manager-image-964000\" (UID: \"58c8f5d2dfe3be38915ab0ea236a8946\") " pod="kube-system/kube-controller-manager-image-964000"
	Sep 19 16:39:25 image-964000 kubelet[2470]: I0919 16:39:25.464155    2470 apiserver.go:52] "Watching apiserver"
	Sep 19 16:39:25 image-964000 kubelet[2470]: I0919 16:39:25.484788    2470 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Sep 19 16:39:25 image-964000 kubelet[2470]: E0919 16:39:25.538647    2470 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"etcd-image-964000\" already exists" pod="kube-system/etcd-image-964000"
	Sep 19 16:39:25 image-964000 kubelet[2470]: I0919 16:39:25.539821    2470 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-image-964000" podStartSLOduration=1.5397976340000001 podCreationTimestamp="2023-09-19 16:39:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-19 16:39:25.539704842 +0000 UTC m=+1.120259168" watchObservedRunningTime="2023-09-19 16:39:25.539797634 +0000 UTC m=+1.120351960"
	Sep 19 16:39:25 image-964000 kubelet[2470]: I0919 16:39:25.546752    2470 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-image-964000" podStartSLOduration=1.546730259 podCreationTimestamp="2023-09-19 16:39:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-19 16:39:25.543314426 +0000 UTC m=+1.123868751" watchObservedRunningTime="2023-09-19 16:39:25.546730259 +0000 UTC m=+1.127284585"
	Sep 19 16:39:25 image-964000 kubelet[2470]: I0919 16:39:25.551383    2470 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-image-964000" podStartSLOduration=1.5513644260000001 podCreationTimestamp="2023-09-19 16:39:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-19 16:39:25.546786759 +0000 UTC m=+1.127341043" watchObservedRunningTime="2023-09-19 16:39:25.551364426 +0000 UTC m=+1.131918751"
	Sep 19 16:39:25 image-964000 kubelet[2470]: I0919 16:39:25.554978    2470 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-image-964000" podStartSLOduration=1.554964134 podCreationTimestamp="2023-09-19 16:39:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-19 16:39:25.551550926 +0000 UTC m=+1.132105210" watchObservedRunningTime="2023-09-19 16:39:25.554964134 +0000 UTC m=+1.135518460"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p image-964000 -n image-964000
helpers_test.go:261: (dbg) Run:  kubectl --context image-964000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestImageBuild/serial/BuildWithBuildArg]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context image-964000 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context image-964000 describe pod storage-provisioner: exit status 1 (36.831208ms)

** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:279: kubectl --context image-964000 describe pod storage-provisioner: exit status 1
--- FAIL: TestImageBuild/serial/BuildWithBuildArg (1.04s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (48.01s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-969000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-969000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (11.459880625s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-969000 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-969000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [9a149ea8-b235-4754-a3f7-00063afa2ad3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [9a149ea8-b235-4754-a3f7-00063afa2ad3] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.014194042s
addons_test.go:238: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-969000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-969000 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-969000 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.105.6
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.105.6: exit status 1 (15.0278425s)

-- stdout --
	;; connection timed out; no servers could be reached
	

-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.105.6" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached


stderr: 
addons_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-969000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-969000 addons disable ingress-dns --alsologtostderr -v=1: (4.19136375s)
addons_test.go:287: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-969000 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-969000 addons disable ingress --alsologtostderr -v=1: (7.0915535s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ingress-addon-legacy-969000 -n ingress-addon-legacy-969000
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-969000 logs -n 25
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                   Args                   |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| dashboard      | --url --port 36195                       | functional-085000           | jenkins | v1.31.2 | 19 Sep 23 09:38 PDT | 19 Sep 23 09:38 PDT |
	|                | -p functional-085000                     |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                   |                             |         |         |                     |                     |
	| update-context | functional-085000                        | functional-085000           | jenkins | v1.31.2 | 19 Sep 23 09:38 PDT | 19 Sep 23 09:38 PDT |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| update-context | functional-085000                        | functional-085000           | jenkins | v1.31.2 | 19 Sep 23 09:38 PDT | 19 Sep 23 09:38 PDT |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| update-context | functional-085000                        | functional-085000           | jenkins | v1.31.2 | 19 Sep 23 09:38 PDT | 19 Sep 23 09:38 PDT |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| image          | functional-085000                        | functional-085000           | jenkins | v1.31.2 | 19 Sep 23 09:38 PDT | 19 Sep 23 09:38 PDT |
	|                | image ls --format short                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| image          | functional-085000                        | functional-085000           | jenkins | v1.31.2 | 19 Sep 23 09:38 PDT | 19 Sep 23 09:38 PDT |
	|                | image ls --format yaml                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| ssh            | functional-085000 ssh pgrep              | functional-085000           | jenkins | v1.31.2 | 19 Sep 23 09:38 PDT |                     |
	|                | buildkitd                                |                             |         |         |                     |                     |
	| image          | functional-085000 image build -t         | functional-085000           | jenkins | v1.31.2 | 19 Sep 23 09:38 PDT | 19 Sep 23 09:38 PDT |
	|                | localhost/my-image:functional-085000     |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr         |                             |         |         |                     |                     |
	| image          | functional-085000 image ls               | functional-085000           | jenkins | v1.31.2 | 19 Sep 23 09:38 PDT | 19 Sep 23 09:38 PDT |
	| image          | functional-085000                        | functional-085000           | jenkins | v1.31.2 | 19 Sep 23 09:38 PDT | 19 Sep 23 09:38 PDT |
	|                | image ls --format json                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| image          | functional-085000                        | functional-085000           | jenkins | v1.31.2 | 19 Sep 23 09:38 PDT | 19 Sep 23 09:38 PDT |
	|                | image ls --format table                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| delete         | -p functional-085000                     | functional-085000           | jenkins | v1.31.2 | 19 Sep 23 09:38 PDT | 19 Sep 23 09:38 PDT |
	| start          | -p image-964000 --driver=qemu2           | image-964000                | jenkins | v1.31.2 | 19 Sep 23 09:38 PDT | 19 Sep 23 09:39 PDT |
	|                |                                          |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-964000                | jenkins | v1.31.2 | 19 Sep 23 09:39 PDT | 19 Sep 23 09:39 PDT |
	|                | ./testdata/image-build/test-normal       |                             |         |         |                     |                     |
	|                | -p image-964000                          |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-964000                | jenkins | v1.31.2 | 19 Sep 23 09:39 PDT | 19 Sep 23 09:39 PDT |
	|                | --build-opt=build-arg=ENV_A=test_env_str |                             |         |         |                     |                     |
	|                | --build-opt=no-cache                     |                             |         |         |                     |                     |
	|                | ./testdata/image-build/test-arg -p       |                             |         |         |                     |                     |
	|                | image-964000                             |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-964000                | jenkins | v1.31.2 | 19 Sep 23 09:39 PDT | 19 Sep 23 09:39 PDT |
	|                | ./testdata/image-build/test-normal       |                             |         |         |                     |                     |
	|                | --build-opt=no-cache -p                  |                             |         |         |                     |                     |
	|                | image-964000                             |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-964000                | jenkins | v1.31.2 | 19 Sep 23 09:39 PDT | 19 Sep 23 09:39 PDT |
	|                | -f inner/Dockerfile                      |                             |         |         |                     |                     |
	|                | ./testdata/image-build/test-f            |                             |         |         |                     |                     |
	|                | -p image-964000                          |                             |         |         |                     |                     |
	| delete         | -p image-964000                          | image-964000                | jenkins | v1.31.2 | 19 Sep 23 09:39 PDT | 19 Sep 23 09:39 PDT |
	| start          | -p ingress-addon-legacy-969000           | ingress-addon-legacy-969000 | jenkins | v1.31.2 | 19 Sep 23 09:39 PDT | 19 Sep 23 09:40 PDT |
	|                | --kubernetes-version=v1.18.20            |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                   |                             |         |         |                     |                     |
	|                | --driver=qemu2                           |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-969000              | ingress-addon-legacy-969000 | jenkins | v1.31.2 | 19 Sep 23 09:40 PDT | 19 Sep 23 09:40 PDT |
	|                | addons enable ingress                    |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                   |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-969000              | ingress-addon-legacy-969000 | jenkins | v1.31.2 | 19 Sep 23 09:40 PDT | 19 Sep 23 09:40 PDT |
	|                | addons enable ingress-dns                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                   |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-969000              | ingress-addon-legacy-969000 | jenkins | v1.31.2 | 19 Sep 23 09:41 PDT | 19 Sep 23 09:41 PDT |
	|                | ssh curl -s http://127.0.0.1/            |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'             |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-969000 ip           | ingress-addon-legacy-969000 | jenkins | v1.31.2 | 19 Sep 23 09:41 PDT | 19 Sep 23 09:41 PDT |
	| addons         | ingress-addon-legacy-969000              | ingress-addon-legacy-969000 | jenkins | v1.31.2 | 19 Sep 23 09:41 PDT | 19 Sep 23 09:41 PDT |
	|                | addons disable ingress-dns               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                   |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-969000              | ingress-addon-legacy-969000 | jenkins | v1.31.2 | 19 Sep 23 09:41 PDT | 19 Sep 23 09:41 PDT |
	|                | addons disable ingress                   |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                   |                             |         |         |                     |                     |
	|----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/19 09:39:28
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.21.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 09:39:28.240144    2910 out.go:296] Setting OutFile to fd 1 ...
	I0919 09:39:28.240276    2910 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:39:28.240279    2910 out.go:309] Setting ErrFile to fd 2...
	I0919 09:39:28.240281    2910 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:39:28.240417    2910 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17240-943/.minikube/bin
	I0919 09:39:28.241426    2910 out.go:303] Setting JSON to false
	I0919 09:39:28.256534    2910 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":542,"bootTime":1695141026,"procs":403,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0919 09:39:28.256624    2910 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0919 09:39:28.260900    2910 out.go:177] * [ingress-addon-legacy-969000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0919 09:39:28.268869    2910 out.go:177]   - MINIKUBE_LOCATION=17240
	I0919 09:39:28.272869    2910 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	I0919 09:39:28.268916    2910 notify.go:220] Checking for updates...
	I0919 09:39:28.275802    2910 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 09:39:28.278831    2910 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 09:39:28.282870    2910 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	I0919 09:39:28.301898    2910 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 09:39:28.305085    2910 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 09:39:28.308744    2910 out.go:177] * Using the qemu2 driver based on user configuration
	I0919 09:39:28.315838    2910 start.go:298] selected driver: qemu2
	I0919 09:39:28.315846    2910 start.go:902] validating driver "qemu2" against <nil>
	I0919 09:39:28.315852    2910 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 09:39:28.318011    2910 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0919 09:39:28.320809    2910 out.go:177] * Automatically selected the socket_vmnet network
	I0919 09:39:28.323967    2910 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 09:39:28.323996    2910 cni.go:84] Creating CNI manager for ""
	I0919 09:39:28.324012    2910 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0919 09:39:28.324016    2910 start_flags.go:321] config:
	{Name:ingress-addon-legacy-969000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-969000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loc
al ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP
: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 09:39:28.328608    2910 iso.go:125] acquiring lock: {Name:mka8e023e06bcb4803b8b8b48f1b2d6ef6b15681 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 09:39:28.333827    2910 out.go:177] * Starting control plane node ingress-addon-legacy-969000 in cluster ingress-addon-legacy-969000
	I0919 09:39:28.337842    2910 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0919 09:39:28.391898    2910 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I0919 09:39:28.391920    2910 cache.go:57] Caching tarball of preloaded images
	I0919 09:39:28.392158    2910 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0919 09:39:28.396868    2910 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0919 09:39:28.404874    2910 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0919 09:39:28.486696    2910 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4?checksum=md5:c8c260b886393123ce9d312d8ac2379e -> /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I0919 09:39:37.357380    2910 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0919 09:39:37.357529    2910 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0919 09:39:38.109779    2910 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0919 09:39:38.109971    2910 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/ingress-addon-legacy-969000/config.json ...
	I0919 09:39:38.109993    2910 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/ingress-addon-legacy-969000/config.json: {Name:mk0ec65134096c011fa6204ed9d7567715607e2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 09:39:38.110238    2910 start.go:365] acquiring machines lock for ingress-addon-legacy-969000: {Name:mk731f293d5b39390bcf4e15f4078ebe7c03576e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 09:39:38.110264    2910 start.go:369] acquired machines lock for "ingress-addon-legacy-969000" in 20.667µs
	I0919 09:39:38.110274    2910 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-969000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 K
ubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-969000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:doc
ker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 09:39:38.110303    2910 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 09:39:38.119238    2910 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0919 09:39:38.133668    2910 start.go:159] libmachine.API.Create for "ingress-addon-legacy-969000" (driver="qemu2")
	I0919 09:39:38.133690    2910 client.go:168] LocalClient.Create starting
	I0919 09:39:38.133768    2910 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca.pem
	I0919 09:39:38.133795    2910 main.go:141] libmachine: Decoding PEM data...
	I0919 09:39:38.133810    2910 main.go:141] libmachine: Parsing certificate...
	I0919 09:39:38.133849    2910 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/cert.pem
	I0919 09:39:38.133868    2910 main.go:141] libmachine: Decoding PEM data...
	I0919 09:39:38.133877    2910 main.go:141] libmachine: Parsing certificate...
	I0919 09:39:38.134233    2910 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17240-943/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17240-943/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0919 09:39:38.374025    2910 main.go:141] libmachine: Creating SSH key...
	I0919 09:39:38.524712    2910 main.go:141] libmachine: Creating Disk image...
	I0919 09:39:38.524718    2910 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 09:39:38.524947    2910 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17240-943/.minikube/machines/ingress-addon-legacy-969000/disk.qcow2.raw /Users/jenkins/minikube-integration/17240-943/.minikube/machines/ingress-addon-legacy-969000/disk.qcow2
	I0919 09:39:38.533815    2910 main.go:141] libmachine: STDOUT: 
	I0919 09:39:38.533831    2910 main.go:141] libmachine: STDERR: 
	I0919 09:39:38.533898    2910 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/ingress-addon-legacy-969000/disk.qcow2 +20000M
	I0919 09:39:38.541041    2910 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 09:39:38.541052    2910 main.go:141] libmachine: STDERR: 
	I0919 09:39:38.541072    2910 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17240-943/.minikube/machines/ingress-addon-legacy-969000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17240-943/.minikube/machines/ingress-addon-legacy-969000/disk.qcow2
	I0919 09:39:38.541081    2910 main.go:141] libmachine: Starting QEMU VM...
	I0919 09:39:38.541119    2910 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4096 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17240-943/.minikube/machines/ingress-addon-legacy-969000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/ingress-addon-legacy-969000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17240-943/.minikube/machines/ingress-addon-legacy-969000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:66:6b:dd:98:91 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/ingress-addon-legacy-969000/disk.qcow2
	I0919 09:39:38.574726    2910 main.go:141] libmachine: STDOUT: 
	I0919 09:39:38.574755    2910 main.go:141] libmachine: STDERR: 
	I0919 09:39:38.574760    2910 main.go:141] libmachine: Attempt 0
	I0919 09:39:38.574774    2910 main.go:141] libmachine: Searching for 66:66:6b:dd:98:91 in /var/db/dhcpd_leases ...
	I0919 09:39:38.574839    2910 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0919 09:39:38.574859    2910 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:ee:fa:3c:88:3c:79 ID:1,ee:fa:3c:88:3c:79 Lease:0x650b202a}
	I0919 09:39:38.574866    2910 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:2e:f:83:d4:c1:c8 ID:1,2e:f:83:d4:c1:c8 Lease:0x650b1f5f}
	I0919 09:39:38.574872    2910 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:e2:56:fc:9d:f3:2c ID:1,e2:56:fc:9d:f3:2c Lease:0x6509cdd3}
	I0919 09:39:38.574877    2910 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:4e:fc:6b:9f:ce:3d ID:1,4e:fc:6b:9f:ce:3d Lease:0x650b1f11}
	I0919 09:39:40.577010    2910 main.go:141] libmachine: Attempt 1
	I0919 09:39:40.577090    2910 main.go:141] libmachine: Searching for 66:66:6b:dd:98:91 in /var/db/dhcpd_leases ...
	I0919 09:39:40.577563    2910 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0919 09:39:40.577615    2910 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:ee:fa:3c:88:3c:79 ID:1,ee:fa:3c:88:3c:79 Lease:0x650b202a}
	I0919 09:39:40.577662    2910 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:2e:f:83:d4:c1:c8 ID:1,2e:f:83:d4:c1:c8 Lease:0x650b1f5f}
	I0919 09:39:40.577695    2910 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:e2:56:fc:9d:f3:2c ID:1,e2:56:fc:9d:f3:2c Lease:0x6509cdd3}
	I0919 09:39:40.577729    2910 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:4e:fc:6b:9f:ce:3d ID:1,4e:fc:6b:9f:ce:3d Lease:0x650b1f11}
	I0919 09:39:42.579873    2910 main.go:141] libmachine: Attempt 2
	I0919 09:39:42.579899    2910 main.go:141] libmachine: Searching for 66:66:6b:dd:98:91 in /var/db/dhcpd_leases ...
	I0919 09:39:42.579993    2910 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0919 09:39:42.580004    2910 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:ee:fa:3c:88:3c:79 ID:1,ee:fa:3c:88:3c:79 Lease:0x650b202a}
	I0919 09:39:42.580009    2910 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:2e:f:83:d4:c1:c8 ID:1,2e:f:83:d4:c1:c8 Lease:0x650b1f5f}
	I0919 09:39:42.580027    2910 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:e2:56:fc:9d:f3:2c ID:1,e2:56:fc:9d:f3:2c Lease:0x6509cdd3}
	I0919 09:39:42.580032    2910 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:4e:fc:6b:9f:ce:3d ID:1,4e:fc:6b:9f:ce:3d Lease:0x650b1f11}
	I0919 09:39:44.582132    2910 main.go:141] libmachine: Attempt 3
	I0919 09:39:44.582149    2910 main.go:141] libmachine: Searching for 66:66:6b:dd:98:91 in /var/db/dhcpd_leases ...
	I0919 09:39:44.582207    2910 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0919 09:39:44.582214    2910 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:ee:fa:3c:88:3c:79 ID:1,ee:fa:3c:88:3c:79 Lease:0x650b202a}
	I0919 09:39:44.582221    2910 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:2e:f:83:d4:c1:c8 ID:1,2e:f:83:d4:c1:c8 Lease:0x650b1f5f}
	I0919 09:39:44.582225    2910 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:e2:56:fc:9d:f3:2c ID:1,e2:56:fc:9d:f3:2c Lease:0x6509cdd3}
	I0919 09:39:44.582230    2910 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:4e:fc:6b:9f:ce:3d ID:1,4e:fc:6b:9f:ce:3d Lease:0x650b1f11}
	I0919 09:39:46.584225    2910 main.go:141] libmachine: Attempt 4
	I0919 09:39:46.584246    2910 main.go:141] libmachine: Searching for 66:66:6b:dd:98:91 in /var/db/dhcpd_leases ...
	I0919 09:39:46.584278    2910 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0919 09:39:46.584287    2910 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:ee:fa:3c:88:3c:79 ID:1,ee:fa:3c:88:3c:79 Lease:0x650b202a}
	I0919 09:39:46.584294    2910 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:2e:f:83:d4:c1:c8 ID:1,2e:f:83:d4:c1:c8 Lease:0x650b1f5f}
	I0919 09:39:46.584299    2910 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:e2:56:fc:9d:f3:2c ID:1,e2:56:fc:9d:f3:2c Lease:0x6509cdd3}
	I0919 09:39:46.584304    2910 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:4e:fc:6b:9f:ce:3d ID:1,4e:fc:6b:9f:ce:3d Lease:0x650b1f11}
	I0919 09:39:48.585931    2910 main.go:141] libmachine: Attempt 5
	I0919 09:39:48.585948    2910 main.go:141] libmachine: Searching for 66:66:6b:dd:98:91 in /var/db/dhcpd_leases ...
	I0919 09:39:48.586026    2910 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0919 09:39:48.586035    2910 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:ee:fa:3c:88:3c:79 ID:1,ee:fa:3c:88:3c:79 Lease:0x650b202a}
	I0919 09:39:48.586042    2910 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:2e:f:83:d4:c1:c8 ID:1,2e:f:83:d4:c1:c8 Lease:0x650b1f5f}
	I0919 09:39:48.586048    2910 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:e2:56:fc:9d:f3:2c ID:1,e2:56:fc:9d:f3:2c Lease:0x6509cdd3}
	I0919 09:39:48.586061    2910 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:4e:fc:6b:9f:ce:3d ID:1,4e:fc:6b:9f:ce:3d Lease:0x650b1f11}
	I0919 09:39:50.588129    2910 main.go:141] libmachine: Attempt 6
	I0919 09:39:50.588224    2910 main.go:141] libmachine: Searching for 66:66:6b:dd:98:91 in /var/db/dhcpd_leases ...
	I0919 09:39:50.588355    2910 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0919 09:39:50.588370    2910 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:66:66:6b:dd:98:91 ID:1,66:66:6b:dd:98:91 Lease:0x650b2055}
	I0919 09:39:50.588377    2910 main.go:141] libmachine: Found match: 66:66:6b:dd:98:91
	I0919 09:39:50.588388    2910 main.go:141] libmachine: IP: 192.168.105.6
	I0919 09:39:50.588400    2910 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.6)...
	I0919 09:39:52.609249    2910 machine.go:88] provisioning docker machine ...
	I0919 09:39:52.609299    2910 buildroot.go:166] provisioning hostname "ingress-addon-legacy-969000"
	I0919 09:39:52.609480    2910 main.go:141] libmachine: Using SSH client type: native
	I0919 09:39:52.610205    2910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103138760] 0x10313aed0 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0919 09:39:52.610227    2910 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-969000 && echo "ingress-addon-legacy-969000" | sudo tee /etc/hostname
	I0919 09:39:52.708349    2910 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-969000
	
	I0919 09:39:52.708492    2910 main.go:141] libmachine: Using SSH client type: native
	I0919 09:39:52.709026    2910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103138760] 0x10313aed0 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0919 09:39:52.709048    2910 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-969000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-969000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-969000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 09:39:52.789585    2910 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 09:39:52.789604    2910 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17240-943/.minikube CaCertPath:/Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17240-943/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17240-943/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17240-943/.minikube}
	I0919 09:39:52.789625    2910 buildroot.go:174] setting up certificates
	I0919 09:39:52.789637    2910 provision.go:83] configureAuth start
	I0919 09:39:52.789644    2910 provision.go:138] copyHostCerts
	I0919 09:39:52.789714    2910 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17240-943/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/17240-943/.minikube/key.pem
	I0919 09:39:52.789807    2910 exec_runner.go:144] found /Users/jenkins/minikube-integration/17240-943/.minikube/key.pem, removing ...
	I0919 09:39:52.789816    2910 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17240-943/.minikube/key.pem
	I0919 09:39:52.790024    2910 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17240-943/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17240-943/.minikube/key.pem (1679 bytes)
	I0919 09:39:52.790260    2910 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/17240-943/.minikube/ca.pem
	I0919 09:39:52.790290    2910 exec_runner.go:144] found /Users/jenkins/minikube-integration/17240-943/.minikube/ca.pem, removing ...
	I0919 09:39:52.790295    2910 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17240-943/.minikube/ca.pem
	I0919 09:39:52.790385    2910 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17240-943/.minikube/ca.pem (1082 bytes)
	I0919 09:39:52.790515    2910 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17240-943/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/17240-943/.minikube/cert.pem
	I0919 09:39:52.790552    2910 exec_runner.go:144] found /Users/jenkins/minikube-integration/17240-943/.minikube/cert.pem, removing ...
	I0919 09:39:52.790555    2910 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17240-943/.minikube/cert.pem
	I0919 09:39:52.790627    2910 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17240-943/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17240-943/.minikube/cert.pem (1123 bytes)
	I0919 09:39:52.790750    2910 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17240-943/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-969000 san=[192.168.105.6 192.168.105.6 localhost 127.0.0.1 minikube ingress-addon-legacy-969000]
	I0919 09:39:52.926789    2910 provision.go:172] copyRemoteCerts
	I0919 09:39:52.926836    2910 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 09:39:52.926846    2910 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/ingress-addon-legacy-969000/id_rsa Username:docker}
	I0919 09:39:52.962747    2910 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 09:39:52.962808    2910 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 09:39:52.970337    2910 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17240-943/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 09:39:52.970381    2910 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17240-943/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0919 09:39:52.977266    2910 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17240-943/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 09:39:52.977313    2910 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17240-943/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 09:39:52.984075    2910 provision.go:86] duration metric: configureAuth took 194.43575ms
	I0919 09:39:52.984082    2910 buildroot.go:189] setting minikube options for container-runtime
	I0919 09:39:52.984178    2910 config.go:182] Loaded profile config "ingress-addon-legacy-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0919 09:39:52.984213    2910 main.go:141] libmachine: Using SSH client type: native
	I0919 09:39:52.984423    2910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103138760] 0x10313aed0 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0919 09:39:52.984429    2910 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 09:39:53.048764    2910 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0919 09:39:53.048772    2910 buildroot.go:70] root file system type: tmpfs
	I0919 09:39:53.048828    2910 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 09:39:53.048886    2910 main.go:141] libmachine: Using SSH client type: native
	I0919 09:39:53.049149    2910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103138760] 0x10313aed0 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0919 09:39:53.049187    2910 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 09:39:53.118782    2910 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 09:39:53.118835    2910 main.go:141] libmachine: Using SSH client type: native
	I0919 09:39:53.119098    2910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103138760] 0x10313aed0 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0919 09:39:53.119109    2910 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 09:39:53.489309    2910 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0919 09:39:53.489326    2910 machine.go:91] provisioned docker machine in 880.068167ms
	I0919 09:39:53.489331    2910 client.go:171] LocalClient.Create took 15.355903042s
	I0919 09:39:53.489348    2910 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-969000" took 15.355953584s
	I0919 09:39:53.489355    2910 start.go:300] post-start starting for "ingress-addon-legacy-969000" (driver="qemu2")
	I0919 09:39:53.489360    2910 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 09:39:53.489427    2910 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 09:39:53.489436    2910 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/ingress-addon-legacy-969000/id_rsa Username:docker}
	I0919 09:39:53.527446    2910 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 09:39:53.529025    2910 info.go:137] Remote host: Buildroot 2021.02.12
	I0919 09:39:53.529032    2910 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17240-943/.minikube/addons for local assets ...
	I0919 09:39:53.529110    2910 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17240-943/.minikube/files for local assets ...
	I0919 09:39:53.529216    2910 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17240-943/.minikube/files/etc/ssl/certs/20512.pem -> 20512.pem in /etc/ssl/certs
	I0919 09:39:53.529220    2910 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17240-943/.minikube/files/etc/ssl/certs/20512.pem -> /etc/ssl/certs/20512.pem
	I0919 09:39:53.529374    2910 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 09:39:53.531917    2910 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17240-943/.minikube/files/etc/ssl/certs/20512.pem --> /etc/ssl/certs/20512.pem (1708 bytes)
	I0919 09:39:53.539292    2910 start.go:303] post-start completed in 49.933292ms
	I0919 09:39:53.539658    2910 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/ingress-addon-legacy-969000/config.json ...
	I0919 09:39:53.539816    2910 start.go:128] duration metric: createHost completed in 15.429779083s
	I0919 09:39:53.539846    2910 main.go:141] libmachine: Using SSH client type: native
	I0919 09:39:53.540060    2910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103138760] 0x10313aed0 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0919 09:39:53.540067    2910 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0919 09:39:53.604172    2910 main.go:141] libmachine: SSH cmd err, output: <nil>: 1695141593.605863294
	
	I0919 09:39:53.604178    2910 fix.go:206] guest clock: 1695141593.605863294
	I0919 09:39:53.604182    2910 fix.go:219] Guest: 2023-09-19 09:39:53.605863294 -0700 PDT Remote: 2023-09-19 09:39:53.539819 -0700 PDT m=+25.318717918 (delta=66.044294ms)
	I0919 09:39:53.604193    2910 fix.go:190] guest clock delta is within tolerance: 66.044294ms
	I0919 09:39:53.604196    2910 start.go:83] releasing machines lock for "ingress-addon-legacy-969000", held for 15.494198666s
	I0919 09:39:53.604446    2910 ssh_runner.go:195] Run: cat /version.json
	I0919 09:39:53.604454    2910 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/ingress-addon-legacy-969000/id_rsa Username:docker}
	I0919 09:39:53.604459    2910 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 09:39:53.604480    2910 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/ingress-addon-legacy-969000/id_rsa Username:docker}
	I0919 09:39:53.683882    2910 ssh_runner.go:195] Run: systemctl --version
	I0919 09:39:53.686193    2910 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0919 09:39:53.688388    2910 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0919 09:39:53.688422    2910 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0919 09:39:53.692036    2910 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0919 09:39:53.697718    2910 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0919 09:39:53.697727    2910 start.go:469] detecting cgroup driver to use...
	I0919 09:39:53.697800    2910 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 09:39:53.705062    2910 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0919 09:39:53.708254    2910 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 09:39:53.711521    2910 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0919 09:39:53.711547    2910 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0919 09:39:53.714888    2910 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 09:39:53.718155    2910 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 09:39:53.721215    2910 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 09:39:53.724052    2910 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 09:39:53.727311    2910 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 09:39:53.730623    2910 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 09:39:53.733395    2910 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 09:39:53.736134    2910 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 09:39:53.819622    2910 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 09:39:53.830063    2910 start.go:469] detecting cgroup driver to use...
	I0919 09:39:53.830132    2910 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 09:39:53.835278    2910 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 09:39:53.840222    2910 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 09:39:53.848373    2910 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 09:39:53.852963    2910 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 09:39:53.857612    2910 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0919 09:39:53.899108    2910 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 09:39:53.904667    2910 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 09:39:53.910153    2910 ssh_runner.go:195] Run: which cri-dockerd
	I0919 09:39:53.911414    2910 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 09:39:53.914410    2910 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0919 09:39:53.919681    2910 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 09:39:54.005432    2910 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 09:39:54.079279    2910 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0919 09:39:54.079296    2910 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0919 09:39:54.084628    2910 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 09:39:54.168106    2910 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 09:39:55.331003    2910 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.162901125s)
	I0919 09:39:55.331081    2910 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 09:39:55.340551    2910 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 09:39:55.355740    2910 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 24.0.6 ...
	I0919 09:39:55.355871    2910 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0919 09:39:55.357353    2910 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 09:39:55.361026    2910 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0919 09:39:55.361071    2910 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 09:39:55.366089    2910 docker.go:636] Got preloaded images: 
	I0919 09:39:55.366096    2910 docker.go:642] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0919 09:39:55.366134    2910 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0919 09:39:55.368916    2910 ssh_runner.go:195] Run: which lz4
	I0919 09:39:55.369928    2910 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0919 09:39:55.370008    2910 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0919 09:39:55.371172    2910 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0919 09:39:55.371187    2910 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (459739018 bytes)
	I0919 09:39:57.072083    2910 docker.go:600] Took 1.702133 seconds to copy over tarball
	I0919 09:39:57.072140    2910 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0919 09:39:58.367014    2910 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.294881s)
	I0919 09:39:58.367026    2910 ssh_runner.go:146] rm: /preloaded.tar.lz4
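	The preload flow above is: scp the `.tar.lz4` over, extract it with tar's compression-filter flag, then delete the tarball. A toy reproduction of the pack/extract round-trip, with gzip standing in for lz4 so the sketch needs no extra tools (all paths are made up):

```shell
SRC=$(mktemp -d); DST=$(mktemp -d)
echo hello > "$SRC/file.txt"
# minikube runs: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
# Same shape here, with gzip as the compressor:
tar -C "$SRC" -czf "$DST/preloaded.tar.gz" file.txt
tar -C "$DST" -xzf "$DST/preloaded.tar.gz"
cat "$DST/file.txt"
```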
	I0919 09:39:58.387616    2910 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0919 09:39:58.391719    2910 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I0919 09:39:58.398504    2910 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 09:39:58.472300    2910 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 09:39:59.752790    2910 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.280495s)
	I0919 09:39:59.752886    2910 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 09:39:59.759111    2910 docker.go:636] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0919 09:39:59.759119    2910 docker.go:642] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0919 09:39:59.759123    2910 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0919 09:39:59.768469    2910 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0919 09:39:59.768495    2910 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0919 09:39:59.768534    2910 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0919 09:39:59.768610    2910 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0919 09:39:59.768728    2910 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0919 09:39:59.768801    2910 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0919 09:39:59.771752    2910 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 09:39:59.771854    2910 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0919 09:39:59.778988    2910 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0919 09:39:59.779104    2910 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0919 09:39:59.779133    2910 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0919 09:39:59.779147    2910 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0919 09:39:59.779194    2910 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0919 09:39:59.779861    2910 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0919 09:39:59.781269    2910 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 09:39:59.782246    2910 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	W0919 09:40:00.337855    2910 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0919 09:40:00.337977    2910 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0919 09:40:00.344322    2910 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I0919 09:40:00.344351    2910 docker.go:316] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0919 09:40:00.344392    2910 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0919 09:40:00.349983    2910 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	W0919 09:40:00.408696    2910 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0919 09:40:00.408857    2910 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0919 09:40:00.415043    2910 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I0919 09:40:00.415068    2910 docker.go:316] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0919 09:40:00.415110    2910 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	I0919 09:40:00.421080    2910 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I0919 09:40:00.622746    2910 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0919 09:40:00.629110    2910 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I0919 09:40:00.629132    2910 docker.go:316] Removing image: registry.k8s.io/pause:3.2
	I0919 09:40:00.629173    2910 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I0919 09:40:00.635163    2910 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	W0919 09:40:00.856234    2910 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I0919 09:40:00.856347    2910 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0919 09:40:00.865409    2910 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I0919 09:40:00.865432    2910 docker.go:316] Removing image: registry.k8s.io/coredns:1.6.7
	I0919 09:40:00.865471    2910 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	I0919 09:40:00.871391    2910 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	W0919 09:40:01.018398    2910 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0919 09:40:01.018531    2910 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0919 09:40:01.024972    2910 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I0919 09:40:01.025003    2910 docker.go:316] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0919 09:40:01.025063    2910 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0919 09:40:01.031088    2910 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	W0919 09:40:01.242060    2910 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I0919 09:40:01.242185    2910 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0919 09:40:01.248418    2910 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I0919 09:40:01.248440    2910 docker.go:316] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0919 09:40:01.248483    2910 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	I0919 09:40:01.254570    2910 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	W0919 09:40:01.632458    2910 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0919 09:40:01.632921    2910 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0919 09:40:01.652552    2910 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I0919 09:40:01.652609    2910 docker.go:316] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0919 09:40:01.652712    2910 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0919 09:40:01.666664    2910 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	W0919 09:40:02.082024    2910 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0919 09:40:02.082505    2910 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 09:40:02.105921    2910 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0919 09:40:02.105971    2910 docker.go:316] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 09:40:02.106114    2910 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 09:40:02.130852    2910 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0919 09:40:02.130940    2910 cache_images.go:92] LoadImages completed in 2.371851459s
	W0919 09:40:02.131009    2910 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20: no such file or directory
	I0919 09:40:02.131102    2910 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0919 09:40:02.145806    2910 cni.go:84] Creating CNI manager for ""
	I0919 09:40:02.145818    2910 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0919 09:40:02.145828    2910 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0919 09:40:02.145841    2910 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.6 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-969000 NodeName:ingress-addon-legacy-969000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0919 09:40:02.145957    2910 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.6
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-969000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.6
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.6"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 09:40:02.146014    2910 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-969000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-969000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0919 09:40:02.146082    2910 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0919 09:40:02.150961    2910 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 09:40:02.151008    2910 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 09:40:02.155029    2910 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (355 bytes)
	I0919 09:40:02.161287    2910 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0919 09:40:02.167179    2910 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2127 bytes)
	I0919 09:40:02.172783    2910 ssh_runner.go:195] Run: grep 192.168.105.6	control-plane.minikube.internal$ /etc/hosts
	I0919 09:40:02.173983    2910 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.6	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 09:40:02.177594    2910 certs.go:56] Setting up /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/ingress-addon-legacy-969000 for IP: 192.168.105.6
	I0919 09:40:02.177604    2910 certs.go:190] acquiring lock for shared ca certs: {Name:mk8e0a0ed9a6157106206482b1c6d1a127cc10e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 09:40:02.177730    2910 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17240-943/.minikube/ca.key
	I0919 09:40:02.177783    2910 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17240-943/.minikube/proxy-client-ca.key
	I0919 09:40:02.177813    2910 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/ingress-addon-legacy-969000/client.key
	I0919 09:40:02.177818    2910 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/ingress-addon-legacy-969000/client.crt with IP's: []
	I0919 09:40:02.284391    2910 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/ingress-addon-legacy-969000/client.crt ...
	I0919 09:40:02.284395    2910 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/ingress-addon-legacy-969000/client.crt: {Name:mk368b4badb5e2b2a52af06dcc1198feda2ad881 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 09:40:02.284626    2910 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/ingress-addon-legacy-969000/client.key ...
	I0919 09:40:02.284629    2910 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/ingress-addon-legacy-969000/client.key: {Name:mk963eb9e36643b0ae633be217176c75aa226e60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 09:40:02.284747    2910 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/ingress-addon-legacy-969000/apiserver.key.b354f644
	I0919 09:40:02.284754    2910 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/ingress-addon-legacy-969000/apiserver.crt.b354f644 with IP's: [192.168.105.6 10.96.0.1 127.0.0.1 10.0.0.1]
	I0919 09:40:02.358151    2910 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/ingress-addon-legacy-969000/apiserver.crt.b354f644 ...
	I0919 09:40:02.358154    2910 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/ingress-addon-legacy-969000/apiserver.crt.b354f644: {Name:mk65d129ff65ae37866aae6f0c3470e98ff6c140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 09:40:02.358307    2910 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/ingress-addon-legacy-969000/apiserver.key.b354f644 ...
	I0919 09:40:02.358310    2910 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/ingress-addon-legacy-969000/apiserver.key.b354f644: {Name:mk166cf2b83d0048d6fe6fe1ce475f30000d9eb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 09:40:02.358418    2910 certs.go:337] copying /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/ingress-addon-legacy-969000/apiserver.crt.b354f644 -> /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/ingress-addon-legacy-969000/apiserver.crt
	I0919 09:40:02.358568    2910 certs.go:341] copying /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/ingress-addon-legacy-969000/apiserver.key.b354f644 -> /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/ingress-addon-legacy-969000/apiserver.key
	I0919 09:40:02.358691    2910 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/ingress-addon-legacy-969000/proxy-client.key
	I0919 09:40:02.358698    2910 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/ingress-addon-legacy-969000/proxy-client.crt with IP's: []
	I0919 09:40:02.424795    2910 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/ingress-addon-legacy-969000/proxy-client.crt ...
	I0919 09:40:02.424800    2910 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/ingress-addon-legacy-969000/proxy-client.crt: {Name:mk7a6fd98d386e33012657014e86291fe0589b48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 09:40:02.424946    2910 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/ingress-addon-legacy-969000/proxy-client.key ...
	I0919 09:40:02.424949    2910 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/ingress-addon-legacy-969000/proxy-client.key: {Name:mk3f4d4c6dc4e17b39cc5ab7d03fa175b629f8e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 09:40:02.425053    2910 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/ingress-addon-legacy-969000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 09:40:02.425068    2910 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/ingress-addon-legacy-969000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 09:40:02.425080    2910 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/ingress-addon-legacy-969000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 09:40:02.425091    2910 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/ingress-addon-legacy-969000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 09:40:02.425103    2910 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17240-943/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 09:40:02.425118    2910 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17240-943/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 09:40:02.425132    2910 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17240-943/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 09:40:02.425143    2910 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17240-943/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 09:40:02.425219    2910 certs.go:437] found cert: /Users/jenkins/minikube-integration/17240-943/.minikube/certs/Users/jenkins/minikube-integration/17240-943/.minikube/certs/2051.pem (1338 bytes)
	W0919 09:40:02.425251    2910 certs.go:433] ignoring /Users/jenkins/minikube-integration/17240-943/.minikube/certs/Users/jenkins/minikube-integration/17240-943/.minikube/certs/2051_empty.pem, impossibly tiny 0 bytes
	I0919 09:40:02.425257    2910 certs.go:437] found cert: /Users/jenkins/minikube-integration/17240-943/.minikube/certs/Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 09:40:02.425283    2910 certs.go:437] found cert: /Users/jenkins/minikube-integration/17240-943/.minikube/certs/Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca.pem (1082 bytes)
	I0919 09:40:02.425303    2910 certs.go:437] found cert: /Users/jenkins/minikube-integration/17240-943/.minikube/certs/Users/jenkins/minikube-integration/17240-943/.minikube/certs/cert.pem (1123 bytes)
	I0919 09:40:02.425323    2910 certs.go:437] found cert: /Users/jenkins/minikube-integration/17240-943/.minikube/certs/Users/jenkins/minikube-integration/17240-943/.minikube/certs/key.pem (1679 bytes)
	I0919 09:40:02.425383    2910 certs.go:437] found cert: /Users/jenkins/minikube-integration/17240-943/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17240-943/.minikube/files/etc/ssl/certs/20512.pem (1708 bytes)
	I0919 09:40:02.425403    2910 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17240-943/.minikube/certs/2051.pem -> /usr/share/ca-certificates/2051.pem
	I0919 09:40:02.425413    2910 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17240-943/.minikube/files/etc/ssl/certs/20512.pem -> /usr/share/ca-certificates/20512.pem
	I0919 09:40:02.425425    2910 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17240-943/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 09:40:02.425735    2910 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/ingress-addon-legacy-969000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0919 09:40:02.433294    2910 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/ingress-addon-legacy-969000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 09:40:02.439998    2910 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/ingress-addon-legacy-969000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 09:40:02.447214    2910 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/ingress-addon-legacy-969000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 09:40:02.454472    2910 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17240-943/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 09:40:02.461688    2910 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17240-943/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0919 09:40:02.468573    2910 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17240-943/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 09:40:02.475322    2910 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17240-943/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 09:40:02.482728    2910 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17240-943/.minikube/certs/2051.pem --> /usr/share/ca-certificates/2051.pem (1338 bytes)
	I0919 09:40:02.489989    2910 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17240-943/.minikube/files/etc/ssl/certs/20512.pem --> /usr/share/ca-certificates/20512.pem (1708 bytes)
	I0919 09:40:02.496687    2910 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17240-943/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 09:40:02.503512    2910 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 09:40:02.508912    2910 ssh_runner.go:195] Run: openssl version
	I0919 09:40:02.510892    2910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20512.pem && ln -fs /usr/share/ca-certificates/20512.pem /etc/ssl/certs/20512.pem"
	I0919 09:40:02.514516    2910 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20512.pem
	I0919 09:40:02.516049    2910 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 19 16:35 /usr/share/ca-certificates/20512.pem
	I0919 09:40:02.516070    2910 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20512.pem
	I0919 09:40:02.518027    2910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20512.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 09:40:02.521091    2910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 09:40:02.524052    2910 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 09:40:02.525545    2910 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 19 16:34 /usr/share/ca-certificates/minikubeCA.pem
	I0919 09:40:02.525566    2910 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 09:40:02.527441    2910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 09:40:02.531003    2910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2051.pem && ln -fs /usr/share/ca-certificates/2051.pem /etc/ssl/certs/2051.pem"
	I0919 09:40:02.534473    2910 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2051.pem
	I0919 09:40:02.535974    2910 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 19 16:35 /usr/share/ca-certificates/2051.pem
	I0919 09:40:02.535993    2910 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2051.pem
	I0919 09:40:02.537932    2910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2051.pem /etc/ssl/certs/51391683.0"
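	The `openssl x509 -hash` / `ln -fs .../<hash>.0` pairs above populate an OpenSSL hashed trust directory: each cert is reachable through a symlink named after its subject-name hash with a `.0` suffix. A self-contained sketch using a throwaway self-signed cert in a temp dir (assumes the `openssl` CLI is available; names are invented):

```shell
DIR=$(mktemp -d)
# Throwaway self-signed cert standing in for minikubeCA.pem.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=minikubeCA" \
  -keyout "$DIR/ca.key" -out "$DIR/ca.pem" -days 1 2>/dev/null
# Link name = subject hash + ".0", exactly the shape of the log's ln -fs step.
HASH=$(openssl x509 -hash -noout -in "$DIR/ca.pem")
ln -fs "$DIR/ca.pem" "$DIR/$HASH.0"
```

	The numeric suffix disambiguates hash collisions (`.0`, `.1`, ...); minikube only ever installs one cert per hash, so `.0` suffices here.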
	I0919 09:40:02.540881    2910 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0919 09:40:02.542241    2910 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0919 09:40:02.542270    2910 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-969000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-969000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.6 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 09:40:02.542327    2910 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0919 09:40:02.547726    2910 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 09:40:02.550973    2910 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 09:40:02.553933    2910 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 09:40:02.556492    2910 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 09:40:02.556506    2910 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0919 09:40:02.581688    2910 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0919 09:40:02.581789    2910 kubeadm.go:322] [preflight] Running pre-flight checks
	I0919 09:40:02.665388    2910 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 09:40:02.665442    2910 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 09:40:02.665493    2910 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0919 09:40:02.714829    2910 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 09:40:02.714875    2910 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 09:40:02.714901    2910 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0919 09:40:02.806246    2910 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 09:40:02.813440    2910 out.go:204]   - Generating certificates and keys ...
	I0919 09:40:02.813479    2910 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0919 09:40:02.813508    2910 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0919 09:40:02.878079    2910 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 09:40:03.018788    2910 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0919 09:40:03.122441    2910 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0919 09:40:03.326104    2910 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0919 09:40:03.427391    2910 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0919 09:40:03.427464    2910 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-969000 localhost] and IPs [192.168.105.6 127.0.0.1 ::1]
	I0919 09:40:03.496179    2910 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0919 09:40:03.496248    2910 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-969000 localhost] and IPs [192.168.105.6 127.0.0.1 ::1]
	I0919 09:40:03.603532    2910 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 09:40:03.715730    2910 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 09:40:03.902311    2910 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0919 09:40:03.902395    2910 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 09:40:03.969308    2910 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 09:40:04.099695    2910 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 09:40:04.284551    2910 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 09:40:04.338328    2910 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 09:40:04.338737    2910 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 09:40:04.342904    2910 out.go:204]   - Booting up control plane ...
	I0919 09:40:04.342961    2910 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 09:40:04.343014    2910 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 09:40:04.348507    2910 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 09:40:04.348552    2910 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 09:40:04.351344    2910 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0919 09:40:15.352784    2910 kubeadm.go:322] [apiclient] All control plane components are healthy after 11.001460 seconds
	I0919 09:40:15.352853    2910 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 09:40:15.358052    2910 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 09:40:15.882751    2910 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 09:40:15.883042    2910 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-969000 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0919 09:40:16.388163    2910 kubeadm.go:322] [bootstrap-token] Using token: n11xoy.e2yn9ijvwx8qwdr7
	I0919 09:40:16.394443    2910 out.go:204]   - Configuring RBAC rules ...
	I0919 09:40:16.394523    2910 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 09:40:16.394582    2910 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 09:40:16.400582    2910 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 09:40:16.401729    2910 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 09:40:16.402880    2910 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 09:40:16.404211    2910 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 09:40:16.407235    2910 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 09:40:16.611153    2910 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0919 09:40:16.803629    2910 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0919 09:40:16.804499    2910 kubeadm.go:322] 
	I0919 09:40:16.804559    2910 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0919 09:40:16.804574    2910 kubeadm.go:322] 
	I0919 09:40:16.804631    2910 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0919 09:40:16.804654    2910 kubeadm.go:322] 
	I0919 09:40:16.804675    2910 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0919 09:40:16.804718    2910 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 09:40:16.804769    2910 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 09:40:16.804776    2910 kubeadm.go:322] 
	I0919 09:40:16.804822    2910 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0919 09:40:16.804886    2910 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 09:40:16.804964    2910 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 09:40:16.804973    2910 kubeadm.go:322] 
	I0919 09:40:16.805039    2910 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 09:40:16.805122    2910 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0919 09:40:16.805130    2910 kubeadm.go:322] 
	I0919 09:40:16.805196    2910 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token n11xoy.e2yn9ijvwx8qwdr7 \
	I0919 09:40:16.805293    2910 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:ca3cab74a9dc47dde0bf47a79e9f850e6b13ad8707fb3a16c62adcc7135054bc \
	I0919 09:40:16.805320    2910 kubeadm.go:322]     --control-plane 
	I0919 09:40:16.805324    2910 kubeadm.go:322] 
	I0919 09:40:16.805401    2910 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0919 09:40:16.805408    2910 kubeadm.go:322] 
	I0919 09:40:16.805515    2910 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token n11xoy.e2yn9ijvwx8qwdr7 \
	I0919 09:40:16.805597    2910 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:ca3cab74a9dc47dde0bf47a79e9f850e6b13ad8707fb3a16c62adcc7135054bc 
	I0919 09:40:16.805750    2910 kubeadm.go:322] W0919 16:40:02.583491    1408 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0919 09:40:16.805928    2910 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0919 09:40:16.806042    2910 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 19.03
	I0919 09:40:16.806140    2910 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0919 09:40:16.806239    2910 kubeadm.go:322] W0919 16:40:04.347851    1408 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0919 09:40:16.806365    2910 kubeadm.go:322] W0919 16:40:04.348695    1408 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
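	The `kubeadm join` commands printed above carry a `--discovery-token-ca-cert-hash` value. As background on that field: it is the SHA-256 digest of the cluster CA certificate's DER-encoded SubjectPublicKeyInfo, prefixed with `sha256:`. A minimal sketch of the formatting step (the input bytes here are a stand-in, not the real CA key material):

```python
import hashlib

def discovery_hash(spki_der: bytes) -> str:
    """Format a CA public key's DER-encoded SubjectPublicKeyInfo the way
    kubeadm expects for --discovery-token-ca-cert-hash."""
    return "sha256:" + hashlib.sha256(spki_der).hexdigest()

# Stand-in bytes for illustration; in practice the SPKI comes from the CA
# certificate, e.g.:
#   openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
#     | openssl pkey -pubin -outform der | openssl dgst -sha256
print(discovery_hash(b"example-spki"))
```

	Joining nodes recompute this digest from the CA certificate served by the API server and refuse to join if it does not match, which is what makes token-based discovery safe against a man-in-the-middle.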
	I0919 09:40:16.806374    2910 cni.go:84] Creating CNI manager for ""
	I0919 09:40:16.806386    2910 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0919 09:40:16.806407    2910 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 09:40:16.806505    2910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:40:16.806505    2910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=4d69d3d50d3fb420e04057e6545e9fd90e260986 minikube.k8s.io/name=ingress-addon-legacy-969000 minikube.k8s.io/updated_at=2023_09_19T09_40_16_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:40:16.810945    2910 ops.go:34] apiserver oom_adj: -16
	I0919 09:40:16.920347    2910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:40:16.955325    2910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:40:17.493344    2910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:40:17.993404    2910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:40:18.493392    2910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:40:18.993359    2910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:40:19.493474    2910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:40:19.993102    2910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:40:20.493392    2910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:40:20.992028    2910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:40:21.493335    2910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:40:21.993340    2910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:40:22.493246    2910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:40:22.992995    2910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:40:23.492963    2910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:40:23.992228    2910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:40:24.493352    2910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:40:24.993240    2910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:40:25.493093    2910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:40:25.993244    2910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:40:26.493217    2910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:40:26.993173    2910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:40:27.493252    2910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:40:27.993232    2910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:40:28.493043    2910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:40:28.993152    2910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:40:29.493167    2910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:40:29.993231    2910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:40:30.493165    2910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:40:30.993121    2910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:40:31.493083    2910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:40:31.992881    2910 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 09:40:32.049424    2910 kubeadm.go:1081] duration metric: took 15.243272625s to wait for elevateKubeSystemPrivileges.
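	The ~500ms burst of `kubectl get sa default` invocations above is a poll-until-ready loop: minikube retries until the `default` service account exists before granting it cluster-admin. A generic sketch of that pattern (names here are illustrative, not minikube's own helpers):

```python
import time

def wait_for(check, timeout=30.0, interval=0.5):
    """Poll check() until it returns True or the timeout elapses,
    sleeping `interval` seconds between attempts."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False

# Example: the check succeeds on the third attempt.
attempts = iter([False, False, True])
assert wait_for(lambda: next(attempts), timeout=5, interval=0.01)
```

	In the log this loop took about 15 seconds, i.e. roughly thirty attempts before the service account appeared.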
	I0919 09:40:32.049438    2910 kubeadm.go:406] StartCluster complete in 29.507684s
	I0919 09:40:32.049448    2910 settings.go:142] acquiring lock: {Name:mk7316c4de97357fafef76bf7f58c3638d00d866 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 09:40:32.049529    2910 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17240-943/kubeconfig
	I0919 09:40:32.049916    2910 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17240-943/kubeconfig: {Name:mk0534d05ae1a49ed75724777911378ef3989658 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 09:40:32.050138    2910 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 09:40:32.050188    2910 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0919 09:40:32.050226    2910 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-969000"
	I0919 09:40:32.050233    2910 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-969000"
	I0919 09:40:32.050238    2910 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-969000"
	I0919 09:40:32.050245    2910 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-969000"
	I0919 09:40:32.050256    2910 host.go:66] Checking if "ingress-addon-legacy-969000" exists ...
	I0919 09:40:32.050439    2910 config.go:182] Loaded profile config "ingress-addon-legacy-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0919 09:40:32.050405    2910 kapi.go:59] client config for ingress-addon-legacy-969000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17240-943/.minikube/profiles/ingress-addon-legacy-969000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17240-943/.minikube/profiles/ingress-addon-legacy-969000/client.key", CAFile:"/Users/jenkins/minikube-integration/17240-943/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1043f8c30), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 09:40:32.050596    2910 host.go:54] host status for "ingress-addon-legacy-969000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17240-943/.minikube/machines/ingress-addon-legacy-969000/monitor: connect: connection refused
	W0919 09:40:32.050609    2910 addons.go:277] "ingress-addon-legacy-969000" is not running, setting storage-provisioner=true and skipping enablement (err=<nil>)
	I0919 09:40:32.050790    2910 cert_rotation.go:137] Starting client certificate rotation controller
	I0919 09:40:32.051295    2910 kapi.go:59] client config for ingress-addon-legacy-969000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17240-943/.minikube/profiles/ingress-addon-legacy-969000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17240-943/.minikube/profiles/ingress-addon-legacy-969000/client.key", CAFile:"/Users/jenkins/minikube-integration/17240-943/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1043f8c30), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 09:40:32.058134    2910 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-969000"
	I0919 09:40:32.058153    2910 host.go:66] Checking if "ingress-addon-legacy-969000" exists ...
	I0919 09:40:32.058846    2910 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 09:40:32.058854    2910 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 09:40:32.058861    2910 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/ingress-addon-legacy-969000/id_rsa Username:docker}
	I0919 09:40:32.061884    2910 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-969000" context rescaled to 1 replicas
	I0919 09:40:32.061904    2910 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.6 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 09:40:32.065812    2910 out.go:177] * Verifying Kubernetes components...
	I0919 09:40:32.074066    2910 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 09:40:32.095173    2910 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
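	The sed pipeline in the line above patches the CoreDNS Corefile in place: it injects a `hosts {}` stanza resolving `host.minikube.internal` to the host IP just before the `forward . /etc/resolv.conf` plugin line, and enables the `log` plugin before `errors`. A sketch of the same text transformation in Python (the sample Corefile is illustrative, not the cluster's actual config):

```python
def inject_host_record(corefile: str, host_ip: str) -> str:
    """Insert a hosts{} block for host.minikube.internal before the
    'forward . /etc/resolv.conf' line, and 'log' before 'errors'."""
    out = []
    for line in corefile.splitlines():
        stripped = line.strip()
        if stripped.startswith("forward . /etc/resolv.conf"):
            out.append("        hosts {")
            out.append(f"           {host_ip} host.minikube.internal")
            out.append("           fallthrough")
            out.append("        }")
        elif stripped == "errors":
            out.append("        log")
        out.append(line)
    return "\n".join(out)

sample = """.:53 {
        errors
        forward . /etc/resolv.conf
}"""
print(inject_host_record(sample, "192.168.105.1"))
```

	The `fallthrough` directive matters: without it, names not listed in the `hosts` block would get NXDOMAIN instead of falling through to the `forward` plugin.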
	I0919 09:40:32.095461    2910 kapi.go:59] client config for ingress-addon-legacy-969000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17240-943/.minikube/profiles/ingress-addon-legacy-969000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17240-943/.minikube/profiles/ingress-addon-legacy-969000/client.key", CAFile:"/Users/jenkins/minikube-integration/17240-943/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1043f8c30), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 09:40:32.095592    2910 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-969000" to be "Ready" ...
	I0919 09:40:32.100421    2910 node_ready.go:49] node "ingress-addon-legacy-969000" has status "Ready":"True"
	I0919 09:40:32.100428    2910 node_ready.go:38] duration metric: took 4.826666ms waiting for node "ingress-addon-legacy-969000" to be "Ready" ...
	I0919 09:40:32.100432    2910 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 09:40:32.101087    2910 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 09:40:32.103932    2910 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-969000" in "kube-system" namespace to be "Ready" ...
	I0919 09:40:32.301985    2910 start.go:917] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0919 09:40:32.305780    2910 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0919 09:40:32.312703    2910 addons.go:502] enable addons completed in 262.520209ms: enabled=[storage-provisioner default-storageclass]
	I0919 09:40:33.113732    2910 pod_ready.go:92] pod "etcd-ingress-addon-legacy-969000" in "kube-system" namespace has status "Ready":"True"
	I0919 09:40:33.113744    2910 pod_ready.go:81] duration metric: took 1.00981525s waiting for pod "etcd-ingress-addon-legacy-969000" in "kube-system" namespace to be "Ready" ...
	I0919 09:40:33.113750    2910 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-969000" in "kube-system" namespace to be "Ready" ...
	I0919 09:40:33.116530    2910 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-969000" in "kube-system" namespace has status "Ready":"True"
	I0919 09:40:33.116537    2910 pod_ready.go:81] duration metric: took 2.782625ms waiting for pod "kube-apiserver-ingress-addon-legacy-969000" in "kube-system" namespace to be "Ready" ...
	I0919 09:40:33.116545    2910 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-969000" in "kube-system" namespace to be "Ready" ...
	I0919 09:40:33.119428    2910 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-969000" in "kube-system" namespace has status "Ready":"True"
	I0919 09:40:33.119434    2910 pod_ready.go:81] duration metric: took 2.884208ms waiting for pod "kube-controller-manager-ingress-addon-legacy-969000" in "kube-system" namespace to be "Ready" ...
	I0919 09:40:33.119439    2910 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-25r6r" in "kube-system" namespace to be "Ready" ...
	I0919 09:40:33.297742    2910 request.go:629] Waited for 177.07175ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/nodes/ingress-addon-legacy-969000
	I0919 09:40:33.303307    2910 pod_ready.go:92] pod "kube-proxy-25r6r" in "kube-system" namespace has status "Ready":"True"
	I0919 09:40:33.303334    2910 pod_ready.go:81] duration metric: took 183.891667ms waiting for pod "kube-proxy-25r6r" in "kube-system" namespace to be "Ready" ...
	I0919 09:40:33.303346    2910 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-969000" in "kube-system" namespace to be "Ready" ...
	I0919 09:40:33.497747    2910 request.go:629] Waited for 194.305375ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-969000
	I0919 09:40:33.697711    2910 request.go:629] Waited for 191.22025ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/nodes/ingress-addon-legacy-969000
	I0919 09:40:33.701989    2910 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-969000" in "kube-system" namespace has status "Ready":"True"
	I0919 09:40:33.702009    2910 pod_ready.go:81] duration metric: took 398.658791ms waiting for pod "kube-scheduler-ingress-addon-legacy-969000" in "kube-system" namespace to be "Ready" ...
	I0919 09:40:33.702024    2910 pod_ready.go:38] duration metric: took 1.601612792s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 09:40:33.702128    2910 api_server.go:52] waiting for apiserver process to appear ...
	I0919 09:40:33.702379    2910 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 09:40:33.715358    2910 api_server.go:72] duration metric: took 1.653446958s to wait for apiserver process to appear ...
	I0919 09:40:33.715376    2910 api_server.go:88] waiting for apiserver healthz status ...
	I0919 09:40:33.715392    2910 api_server.go:253] Checking apiserver healthz at https://192.168.105.6:8443/healthz ...
	I0919 09:40:33.723163    2910 api_server.go:279] https://192.168.105.6:8443/healthz returned 200:
	ok
	I0919 09:40:33.724161    2910 api_server.go:141] control plane version: v1.18.20
	I0919 09:40:33.724175    2910 api_server.go:131] duration metric: took 8.793542ms to wait for apiserver health ...
	I0919 09:40:33.724183    2910 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 09:40:33.897705    2910 request.go:629] Waited for 173.448083ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods
	I0919 09:40:33.908606    2910 system_pods.go:59] 6 kube-system pods found
	I0919 09:40:33.908655    2910 system_pods.go:61] "coredns-66bff467f8-t5nnk" [f2a82d5b-937a-40fd-ac64-ab3d74fa1fff] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 09:40:33.908669    2910 system_pods.go:61] "etcd-ingress-addon-legacy-969000" [91387d5a-129b-458a-ba6f-f0b9ed2574db] Running
	I0919 09:40:33.908683    2910 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-969000" [447cafbf-89d8-4183-aff3-90c1a7c9d8ba] Running
	I0919 09:40:33.908693    2910 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-969000" [4aaee89a-6e90-44b1-96c2-22e14a627f31] Running
	I0919 09:40:33.908705    2910 system_pods.go:61] "kube-proxy-25r6r" [fb0d2bb4-a747-4d57-92a4-8148e1e5fb70] Running
	I0919 09:40:33.908721    2910 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-969000" [86134dd1-5be8-4196-a935-354428a2b5d2] Running
	I0919 09:40:33.908734    2910 system_pods.go:74] duration metric: took 184.546625ms to wait for pod list to return data ...
	I0919 09:40:33.908750    2910 default_sa.go:34] waiting for default service account to be created ...
	I0919 09:40:34.096212    2910 request.go:629] Waited for 187.339375ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/default/serviceaccounts
	I0919 09:40:34.103452    2910 default_sa.go:45] found service account: "default"
	I0919 09:40:34.103490    2910 default_sa.go:55] duration metric: took 194.730875ms for default service account to be created ...
	I0919 09:40:34.103512    2910 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 09:40:34.297694    2910 request.go:629] Waited for 194.090542ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods
	I0919 09:40:34.310930    2910 system_pods.go:86] 6 kube-system pods found
	I0919 09:40:34.310968    2910 system_pods.go:89] "coredns-66bff467f8-t5nnk" [f2a82d5b-937a-40fd-ac64-ab3d74fa1fff] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 09:40:34.310982    2910 system_pods.go:89] "etcd-ingress-addon-legacy-969000" [91387d5a-129b-458a-ba6f-f0b9ed2574db] Running
	I0919 09:40:34.310998    2910 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-969000" [447cafbf-89d8-4183-aff3-90c1a7c9d8ba] Running
	I0919 09:40:34.311009    2910 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-969000" [4aaee89a-6e90-44b1-96c2-22e14a627f31] Running
	I0919 09:40:34.311022    2910 system_pods.go:89] "kube-proxy-25r6r" [fb0d2bb4-a747-4d57-92a4-8148e1e5fb70] Running
	I0919 09:40:34.311030    2910 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-969000" [86134dd1-5be8-4196-a935-354428a2b5d2] Running
	I0919 09:40:34.311053    2910 system_pods.go:126] duration metric: took 207.5305ms to wait for k8s-apps to be running ...
	I0919 09:40:34.311067    2910 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 09:40:34.311289    2910 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 09:40:34.329232    2910 system_svc.go:56] duration metric: took 18.165334ms WaitForService to wait for kubelet.
	I0919 09:40:34.329249    2910 kubeadm.go:581] duration metric: took 2.267365916s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0919 09:40:34.329271    2910 node_conditions.go:102] verifying NodePressure condition ...
	I0919 09:40:34.497715    2910 request.go:629] Waited for 168.36275ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/nodes
	I0919 09:40:34.506303    2910 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0919 09:40:34.506358    2910 node_conditions.go:123] node cpu capacity is 2
	I0919 09:40:34.506387    2910 node_conditions.go:105] duration metric: took 177.106167ms to run NodePressure ...
	I0919 09:40:34.506417    2910 start.go:228] waiting for startup goroutines ...
	I0919 09:40:34.506437    2910 start.go:233] waiting for cluster config update ...
	I0919 09:40:34.506480    2910 start.go:242] writing updated cluster config ...
	I0919 09:40:34.507691    2910 ssh_runner.go:195] Run: rm -f paused
	I0919 09:40:34.572945    2910 start.go:600] kubectl: 1.27.2, cluster: 1.18.20 (minor skew: 9)
	I0919 09:40:34.576903    2910 out.go:177] 
	W0919 09:40:34.579561    2910 out.go:239] ! /usr/local/bin/kubectl is version 1.27.2, which may have incompatibilities with Kubernetes 1.18.20.
	I0919 09:40:34.583584    2910 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0919 09:40:34.591598    2910 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-969000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Tue 2023-09-19 16:39:49 UTC, ends at Tue 2023-09-19 16:41:40 UTC. --
	Sep 19 16:41:16 ingress-addon-legacy-969000 dockerd[1084]: time="2023-09-19T16:41:16.669862183Z" level=info msg="ignoring event" container=4cab749b813ed70d620241a97e84734de8814ef54e1ea87e2a46ac30ce16a0fc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 16:41:16 ingress-addon-legacy-969000 dockerd[1090]: time="2023-09-19T16:41:16.670114779Z" level=warning msg="cleaning up after shim disconnected" id=4cab749b813ed70d620241a97e84734de8814ef54e1ea87e2a46ac30ce16a0fc namespace=moby
	Sep 19 16:41:16 ingress-addon-legacy-969000 dockerd[1090]: time="2023-09-19T16:41:16.670125571Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 19 16:41:31 ingress-addon-legacy-969000 dockerd[1084]: time="2023-09-19T16:41:31.077436730Z" level=info msg="ignoring event" container=f7af7ed3b8cb2a25b651a78e50ae888af95a75ca2be18e637591334be2e85248 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 16:41:31 ingress-addon-legacy-969000 dockerd[1090]: time="2023-09-19T16:41:31.077881534Z" level=info msg="shim disconnected" id=f7af7ed3b8cb2a25b651a78e50ae888af95a75ca2be18e637591334be2e85248 namespace=moby
	Sep 19 16:41:31 ingress-addon-legacy-969000 dockerd[1090]: time="2023-09-19T16:41:31.077933994Z" level=warning msg="cleaning up after shim disconnected" id=f7af7ed3b8cb2a25b651a78e50ae888af95a75ca2be18e637591334be2e85248 namespace=moby
	Sep 19 16:41:31 ingress-addon-legacy-969000 dockerd[1090]: time="2023-09-19T16:41:31.077942744Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 19 16:41:32 ingress-addon-legacy-969000 dockerd[1090]: time="2023-09-19T16:41:32.090511695Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 19 16:41:32 ingress-addon-legacy-969000 dockerd[1090]: time="2023-09-19T16:41:32.090925831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 19 16:41:32 ingress-addon-legacy-969000 dockerd[1090]: time="2023-09-19T16:41:32.090939998Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 19 16:41:32 ingress-addon-legacy-969000 dockerd[1090]: time="2023-09-19T16:41:32.090948249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 19 16:41:32 ingress-addon-legacy-969000 dockerd[1084]: time="2023-09-19T16:41:32.130526514Z" level=info msg="ignoring event" container=9379358885cb043a813c8602e35f8a68f7e11fed76488fc00932f66377e14d10 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 16:41:32 ingress-addon-legacy-969000 dockerd[1090]: time="2023-09-19T16:41:32.130865107Z" level=info msg="shim disconnected" id=9379358885cb043a813c8602e35f8a68f7e11fed76488fc00932f66377e14d10 namespace=moby
	Sep 19 16:41:32 ingress-addon-legacy-969000 dockerd[1090]: time="2023-09-19T16:41:32.130904774Z" level=warning msg="cleaning up after shim disconnected" id=9379358885cb043a813c8602e35f8a68f7e11fed76488fc00932f66377e14d10 namespace=moby
	Sep 19 16:41:32 ingress-addon-legacy-969000 dockerd[1090]: time="2023-09-19T16:41:32.130910025Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 19 16:41:35 ingress-addon-legacy-969000 dockerd[1084]: time="2023-09-19T16:41:35.527277445Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=91aed32254ca6f573cfb1c221ca2f2a730ad2d8f48f9721cc9fcb4355d19ca24
	Sep 19 16:41:35 ingress-addon-legacy-969000 dockerd[1084]: time="2023-09-19T16:41:35.539920632Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=91aed32254ca6f573cfb1c221ca2f2a730ad2d8f48f9721cc9fcb4355d19ca24
	Sep 19 16:41:35 ingress-addon-legacy-969000 dockerd[1090]: time="2023-09-19T16:41:35.614808019Z" level=info msg="shim disconnected" id=91aed32254ca6f573cfb1c221ca2f2a730ad2d8f48f9721cc9fcb4355d19ca24 namespace=moby
	Sep 19 16:41:35 ingress-addon-legacy-969000 dockerd[1090]: time="2023-09-19T16:41:35.614918022Z" level=warning msg="cleaning up after shim disconnected" id=91aed32254ca6f573cfb1c221ca2f2a730ad2d8f48f9721cc9fcb4355d19ca24 namespace=moby
	Sep 19 16:41:35 ingress-addon-legacy-969000 dockerd[1090]: time="2023-09-19T16:41:35.614930397Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 19 16:41:35 ingress-addon-legacy-969000 dockerd[1084]: time="2023-09-19T16:41:35.615886379Z" level=info msg="ignoring event" container=91aed32254ca6f573cfb1c221ca2f2a730ad2d8f48f9721cc9fcb4355d19ca24 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 16:41:35 ingress-addon-legacy-969000 dockerd[1084]: time="2023-09-19T16:41:35.658255840Z" level=info msg="ignoring event" container=b9e70c662ee03c4058f37e3650da032f8f51dd7b3e3fcfc802af35962929c860 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 16:41:35 ingress-addon-legacy-969000 dockerd[1090]: time="2023-09-19T16:41:35.658510596Z" level=info msg="shim disconnected" id=b9e70c662ee03c4058f37e3650da032f8f51dd7b3e3fcfc802af35962929c860 namespace=moby
	Sep 19 16:41:35 ingress-addon-legacy-969000 dockerd[1090]: time="2023-09-19T16:41:35.658547639Z" level=warning msg="cleaning up after shim disconnected" id=b9e70c662ee03c4058f37e3650da032f8f51dd7b3e3fcfc802af35962929c860 namespace=moby
	Sep 19 16:41:35 ingress-addon-legacy-969000 dockerd[1090]: time="2023-09-19T16:41:35.658553681Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE                                      COMMAND                  CREATED              STATUS                          PORTS     NAMES
	9379358885cb   a39a07419475                               "/hello-app"             8 seconds ago        Exited (1) 8 seconds ago                  k8s_hello-world-app_hello-world-app-5f5d8b66bb-xsj8c_default_b758cf4f-5c65-4191-b0f8-4f74c45193e6_2
	ddb65fdcc650   k8s.gcr.io/pause:3.2                       "/pause"                 26 seconds ago       Up 26 seconds                             k8s_POD_hello-world-app-5f5d8b66bb-xsj8c_default_b758cf4f-5c65-4191-b0f8-4f74c45193e6_0
	8145b452c424   nginx                                      "/docker-entrypoint.…"   33 seconds ago       Up 32 seconds                             k8s_nginx_nginx_default_9a149ea8-b235-4754-a3f7-00063afa2ad3_0
	018015ab4ff1   k8s.gcr.io/pause:3.2                       "/pause"                 36 seconds ago       Up 35 seconds                             k8s_POD_nginx_default_9a149ea8-b235-4754-a3f7-00063afa2ad3_0
	f7af7ed3b8cb   k8s.gcr.io/pause:3.2                       "/pause"                 47 seconds ago       Exited (137) 9 seconds ago                k8s_POD_kube-ingress-dns-minikube_kube-system_9b795b36-fb0f-47d1-94fc-14f0b701dd18_0
	91aed32254ca   registry.k8s.io/ingress-nginx/controller   "/usr/bin/dumb-init …"   49 seconds ago       Exited (137) 4 seconds ago                k8s_controller_ingress-nginx-controller-7fcf777cb7-98f5j_ingress-nginx_e1b0c147-03a5-49bf-9aba-a7ecbb49a8bf_0
	b9e70c662ee0   k8s.gcr.io/pause:3.2                       "/pause"                 About a minute ago   Exited (0) 4 seconds ago                  k8s_POD_ingress-nginx-controller-7fcf777cb7-98f5j_ingress-nginx_e1b0c147-03a5-49bf-9aba-a7ecbb49a8bf_0
	0465bee92525   jettech/kube-webhook-certgen               "/kube-webhook-certg…"   About a minute ago   Exited (0) About a minute ago             k8s_patch_ingress-nginx-admission-patch-6prnn_ingress-nginx_fee912a5-12e1-4db8-a828-b7fcab244d31_0
	ed610ab48fde   jettech/kube-webhook-certgen               "/kube-webhook-certg…"   About a minute ago   Exited (0) About a minute ago             k8s_create_ingress-nginx-admission-create-s5zgg_ingress-nginx_6f0c16b4-526d-4e6c-9883-ea342ade1a4b_0
	c6fc0d7e453c   k8s.gcr.io/pause:3.2                       "/pause"                 About a minute ago   Exited (0) About a minute ago             k8s_POD_ingress-nginx-admission-patch-6prnn_ingress-nginx_fee912a5-12e1-4db8-a828-b7fcab244d31_0
	99b3dd5d8faf   k8s.gcr.io/pause:3.2                       "/pause"                 About a minute ago   Exited (0) About a minute ago             k8s_POD_ingress-nginx-admission-create-s5zgg_ingress-nginx_6f0c16b4-526d-4e6c-9883-ea342ade1a4b_0
	f008a98475e0   6e17ba78cf3e                               "/coredns -conf /etc…"   About a minute ago   Up About a minute                         k8s_coredns_coredns-66bff467f8-t5nnk_kube-system_f2a82d5b-937a-40fd-ac64-ab3d74fa1fff_0
	49e1a89bb41d   k8s.gcr.io/pause:3.2                       "/pause"                 About a minute ago   Up About a minute                         k8s_POD_coredns-66bff467f8-t5nnk_kube-system_f2a82d5b-937a-40fd-ac64-ab3d74fa1fff_0
	291ef064b97d   565297bc6f7d                               "/usr/local/bin/kube…"   About a minute ago   Up About a minute                         k8s_kube-proxy_kube-proxy-25r6r_kube-system_fb0d2bb4-a747-4d57-92a4-8148e1e5fb70_0
	ab114b088c5e   k8s.gcr.io/pause:3.2                       "/pause"                 About a minute ago   Up About a minute                         k8s_POD_kube-proxy-25r6r_kube-system_fb0d2bb4-a747-4d57-92a4-8148e1e5fb70_0
	83da6f593801   2694cf044d66                               "kube-apiserver --ad…"   About a minute ago   Up About a minute                         k8s_kube-apiserver_kube-apiserver-ingress-addon-legacy-969000_kube-system_36bf945afdf7c8fc8d73074b2bf4e3c3_0
	7adc72d18ccb   095f37015706                               "kube-scheduler --au…"   About a minute ago   Up About a minute                         k8s_kube-scheduler_kube-scheduler-ingress-addon-legacy-969000_kube-system_d12e497b0008e22acbcd5a9cf2dd48ac_0
	d930c78397b2   ab707b0a0ea3                               "etcd --advertise-cl…"   About a minute ago   Up About a minute                         k8s_etcd_etcd-ingress-addon-legacy-969000_kube-system_d6b726c03ddfaeaf94cccf1c2d2537ce_0
	e8aa1bf29be5   68a4fac29a86                               "kube-controller-man…"   About a minute ago   Up About a minute                         k8s_kube-controller-manager_kube-controller-manager-ingress-addon-legacy-969000_kube-system_b395a1e17534e69e27827b1f8d737725_0
	c59ff3941603   k8s.gcr.io/pause:3.2                       "/pause"                 About a minute ago   Up About a minute                         k8s_POD_etcd-ingress-addon-legacy-969000_kube-system_d6b726c03ddfaeaf94cccf1c2d2537ce_0
	92e4758d24e5   k8s.gcr.io/pause:3.2                       "/pause"                 About a minute ago   Up About a minute                         k8s_POD_kube-scheduler-ingress-addon-legacy-969000_kube-system_d12e497b0008e22acbcd5a9cf2dd48ac_0
	1bbca2c9b959   k8s.gcr.io/pause:3.2                       "/pause"                 About a minute ago   Up About a minute                         k8s_POD_kube-controller-manager-ingress-addon-legacy-969000_kube-system_b395a1e17534e69e27827b1f8d737725_0
	00a568694c4b   k8s.gcr.io/pause:3.2                       "/pause"                 About a minute ago   Up About a minute                         k8s_POD_kube-apiserver-ingress-addon-legacy-969000_kube-system_36bf945afdf7c8fc8d73074b2bf4e3c3_0
	time="2023-09-19T16:41:40Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService"
	
	* 
	* ==> coredns [f008a98475e0] <==
	* [INFO] 172.17.0.1:12440 - 64849 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000027626s
	[INFO] 172.17.0.1:12440 - 29620 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000024668s
	[INFO] 172.17.0.1:12440 - 15684 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000023335s
	[INFO] 172.17.0.1:12440 - 42583 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000034918s
	[INFO] 172.17.0.1:37062 - 50914 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000013709s
	[INFO] 172.17.0.1:37062 - 40421 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000013542s
	[INFO] 172.17.0.1:37062 - 1168 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000019085s
	[INFO] 172.17.0.1:37062 - 11420 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000012876s
	[INFO] 172.17.0.1:37062 - 62498 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000012167s
	[INFO] 172.17.0.1:37062 - 33763 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000012375s
	[INFO] 172.17.0.1:37062 - 44579 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000013584s
	[INFO] 172.17.0.1:52121 - 2891 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000013876s
	[INFO] 172.17.0.1:21509 - 58585 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000013501s
	[INFO] 172.17.0.1:21509 - 2837 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000014s
	[INFO] 172.17.0.1:21509 - 49660 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000010084s
	[INFO] 172.17.0.1:52121 - 63132 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000020751s
	[INFO] 172.17.0.1:21509 - 20384 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000013459s
	[INFO] 172.17.0.1:52121 - 48679 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000013417s
	[INFO] 172.17.0.1:21509 - 33982 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000023292s
	[INFO] 172.17.0.1:52121 - 5268 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000008168s
	[INFO] 172.17.0.1:21509 - 54854 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000008959s
	[INFO] 172.17.0.1:52121 - 29360 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000033585s
	[INFO] 172.17.0.1:21509 - 41271 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000008918s
	[INFO] 172.17.0.1:52121 - 49648 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000022459s
	[INFO] 172.17.0.1:52121 - 63069 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00002321s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-969000
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-969000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4d69d3d50d3fb420e04057e6545e9fd90e260986
	                    minikube.k8s.io/name=ingress-addon-legacy-969000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_19T09_40_16_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 19 Sep 2023 16:40:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-969000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 19 Sep 2023 16:41:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 19 Sep 2023 16:41:23 +0000   Tue, 19 Sep 2023 16:40:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 19 Sep 2023 16:41:23 +0000   Tue, 19 Sep 2023 16:40:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 19 Sep 2023 16:41:23 +0000   Tue, 19 Sep 2023 16:40:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 19 Sep 2023 16:41:23 +0000   Tue, 19 Sep 2023 16:40:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.6
	  Hostname:    ingress-addon-legacy-969000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             4003124Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             4003124Ki
	  pods:               110
	System Info:
	  Machine ID:                 55dfcd5557db42e1a8010d47d4678ff9
	  System UUID:                55dfcd5557db42e1a8010d47d4678ff9
	  Boot ID:                    0b45fb6a-dd65-463c-9442-777e646177a3
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-xsj8c                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 coredns-66bff467f8-t5nnk                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     68s
	  kube-system                 etcd-ingress-addon-legacy-969000                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-apiserver-ingress-addon-legacy-969000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-969000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-proxy-25r6r                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 kube-scheduler-ingress-addon-legacy-969000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         77s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (1%)   170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From        Message
	  ----    ------                   ----  ----        -------
	  Normal  Starting                 78s   kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  77s   kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  77s   kubelet     Node ingress-addon-legacy-969000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    77s   kubelet     Node ingress-addon-legacy-969000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     77s   kubelet     Node ingress-addon-legacy-969000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                77s   kubelet     Node ingress-addon-legacy-969000 status is now: NodeReady
	  Normal  Starting                 68s   kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Sep19 16:39] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.670371] EINJ: EINJ table not found.
	[  +0.527650] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.044647] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000826] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +4.190954] systemd-fstab-generator[478]: Ignoring "noauto" for root device
	[  +0.084936] systemd-fstab-generator[489]: Ignoring "noauto" for root device
	[  +0.465098] systemd-fstab-generator[795]: Ignoring "noauto" for root device
	[  +0.186615] systemd-fstab-generator[830]: Ignoring "noauto" for root device
	[  +0.076845] systemd-fstab-generator[841]: Ignoring "noauto" for root device
	[  +0.087177] systemd-fstab-generator[854]: Ignoring "noauto" for root device
	[  +4.305169] systemd-fstab-generator[1059]: Ignoring "noauto" for root device
	[  +1.249152] kauditd_printk_skb: 53 callbacks suppressed
	[Sep19 16:40] systemd-fstab-generator[1525]: Ignoring "noauto" for root device
	[  +7.830674] kauditd_printk_skb: 29 callbacks suppressed
	[  +0.071436] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[  +5.790492] systemd-fstab-generator[2612]: Ignoring "noauto" for root device
	[ +16.080944] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.647647] kauditd_printk_skb: 15 callbacks suppressed
	[  +1.266851] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[Sep19 16:41] kauditd_printk_skb: 1 callbacks suppressed
	
	* 
	* ==> etcd [d930c78397b2] <==
	* raft2023/09/19 16:40:12 INFO: ed054832bd1917e1 became follower at term 0
	raft2023/09/19 16:40:12 INFO: newRaft ed054832bd1917e1 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/09/19 16:40:12 INFO: ed054832bd1917e1 became follower at term 1
	raft2023/09/19 16:40:12 INFO: ed054832bd1917e1 switched to configuration voters=(17079136544630577121)
	2023-09-19 16:40:12.039111 W | auth: simple token is not cryptographically signed
	2023-09-19 16:40:12.040191 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-09-19 16:40:12.042034 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-09-19 16:40:12.050901 I | embed: listening for peers on 192.168.105.6:2380
	2023-09-19 16:40:12.050959 I | etcdserver: ed054832bd1917e1 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-09-19 16:40:12.051035 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2023/09/19 16:40:12 INFO: ed054832bd1917e1 switched to configuration voters=(17079136544630577121)
	2023-09-19 16:40:12.051111 I | etcdserver/membership: added member ed054832bd1917e1 [https://192.168.105.6:2380] to cluster 45a39c2c59b0edf4
	raft2023/09/19 16:40:12 INFO: ed054832bd1917e1 is starting a new election at term 1
	raft2023/09/19 16:40:12 INFO: ed054832bd1917e1 became candidate at term 2
	raft2023/09/19 16:40:12 INFO: ed054832bd1917e1 received MsgVoteResp from ed054832bd1917e1 at term 2
	raft2023/09/19 16:40:12 INFO: ed054832bd1917e1 became leader at term 2
	raft2023/09/19 16:40:12 INFO: raft.node: ed054832bd1917e1 elected leader ed054832bd1917e1 at term 2
	2023-09-19 16:40:12.638112 I | etcdserver: published {Name:ingress-addon-legacy-969000 ClientURLs:[https://192.168.105.6:2379]} to cluster 45a39c2c59b0edf4
	2023-09-19 16:40:12.638670 I | embed: ready to serve client requests
	2023-09-19 16:40:12.639209 I | embed: ready to serve client requests
	2023-09-19 16:40:12.642666 I | embed: serving client requests on 127.0.0.1:2379
	2023-09-19 16:40:12.643692 I | embed: serving client requests on 192.168.105.6:2379
	2023-09-19 16:40:12.643931 I | etcdserver: setting up the initial cluster version to 3.4
	2023-09-19 16:40:12.644791 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-09-19 16:40:12.644917 I | etcdserver/api: enabled capabilities for version 3.4
	
	* 
	* ==> kernel <==
	*  16:41:40 up 1 min,  0 users,  load average: 0.87, 0.34, 0.13
	Linux ingress-addon-legacy-969000 5.10.57 #1 SMP PREEMPT Mon Sep 18 20:10:16 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [83da6f593801] <==
	* I0919 16:40:14.146238       1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister
	E0919 16:40:14.151866       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.105.6, ResourceVersion: 0, AdditionalErrorMsg: 
	I0919 16:40:14.230680       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0919 16:40:14.230694       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0919 16:40:14.230958       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0919 16:40:14.237666       1 cache.go:39] Caches are synced for autoregister controller
	I0919 16:40:14.250449       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0919 16:40:15.129751       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0919 16:40:15.129818       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0919 16:40:15.142890       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0919 16:40:15.163485       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0919 16:40:15.163626       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0919 16:40:15.285487       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0919 16:40:15.296798       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0919 16:40:15.395327       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.105.6]
	I0919 16:40:15.395780       1 controller.go:609] quota admission added evaluator for: endpoints
	I0919 16:40:15.397790       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0919 16:40:16.425563       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0919 16:40:16.606426       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0919 16:40:16.799336       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0919 16:40:22.965584       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0919 16:40:32.091470       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0919 16:40:32.280119       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0919 16:40:34.989832       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0919 16:41:04.487940       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	* 
	* ==> kube-controller-manager [e8aa1bf29be5] <==
	* I0919 16:40:32.224484       1 shared_informer.go:230] Caches are synced for endpoint 
	I0919 16:40:32.239398       1 shared_informer.go:230] Caches are synced for ReplicaSet 
	I0919 16:40:32.277753       1 shared_informer.go:230] Caches are synced for certificate-csrapproving 
	I0919 16:40:32.278817       1 shared_informer.go:230] Caches are synced for deployment 
	I0919 16:40:32.279497       1 shared_informer.go:230] Caches are synced for certificate-csrsigning 
	I0919 16:40:32.283627       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"0193cba1-700e-4d7d-8338-0e6120604d45", APIVersion:"apps/v1", ResourceVersion:"323", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 1
	I0919 16:40:32.291063       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"9a218bfa-1a38-47ff-8eeb-f7c36cd2276c", APIVersion:"apps/v1", ResourceVersion:"325", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-t5nnk
	I0919 16:40:32.366663       1 shared_informer.go:230] Caches are synced for persistent volume 
	I0919 16:40:32.367143       1 shared_informer.go:230] Caches are synced for bootstrap_signer 
	I0919 16:40:32.384076       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0919 16:40:32.434719       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0919 16:40:32.434730       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0919 16:40:32.455213       1 shared_informer.go:230] Caches are synced for disruption 
	I0919 16:40:32.455240       1 disruption.go:339] Sending events to api server.
	I0919 16:40:32.476385       1 shared_informer.go:230] Caches are synced for resource quota 
	I0919 16:40:32.478543       1 shared_informer.go:230] Caches are synced for resource quota 
	I0919 16:40:34.985274       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"e44f4f90-b876-42e5-bfbc-ce17e7f0d547", APIVersion:"apps/v1", ResourceVersion:"388", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0919 16:40:34.993799       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"4351ddf0-11fc-48ad-91f5-955c9339bcfa", APIVersion:"apps/v1", ResourceVersion:"389", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-98f5j
	I0919 16:40:35.006313       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"53a5142f-842a-4808-889a-39eab799990e", APIVersion:"batch/v1", ResourceVersion:"393", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-s5zgg
	I0919 16:40:35.033847       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"b0393253-4e59-403b-920a-6c97829dbeb6", APIVersion:"batch/v1", ResourceVersion:"402", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-6prnn
	I0919 16:40:38.156350       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"b0393253-4e59-403b-920a-6c97829dbeb6", APIVersion:"batch/v1", ResourceVersion:"419", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0919 16:40:38.176057       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"53a5142f-842a-4808-889a-39eab799990e", APIVersion:"batch/v1", ResourceVersion:"415", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0919 16:41:13.765366       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"e1518111-82a5-43e6-9ffe-a0c9fcf1754d", APIVersion:"apps/v1", ResourceVersion:"511", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0919 16:41:13.770188       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"785dbea9-a8f7-4345-944f-2c957cbd4297", APIVersion:"apps/v1", ResourceVersion:"512", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-xsj8c
	E0919 16:41:38.283859       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-j2wcl" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	* 
	* ==> kube-proxy [291ef064b97d] <==
	* W0919 16:40:32.595895       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0919 16:40:32.600140       1 node.go:136] Successfully retrieved node IP: 192.168.105.6
	I0919 16:40:32.600166       1 server_others.go:186] Using iptables Proxier.
	I0919 16:40:32.600445       1 server.go:583] Version: v1.18.20
	I0919 16:40:32.600970       1 config.go:133] Starting endpoints config controller
	I0919 16:40:32.600989       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0919 16:40:32.601012       1 config.go:315] Starting service config controller
	I0919 16:40:32.601019       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0919 16:40:32.701156       1 shared_informer.go:230] Caches are synced for endpoints config 
	I0919 16:40:32.701208       1 shared_informer.go:230] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [7adc72d18ccb] <==
	* W0919 16:40:14.153028       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0919 16:40:14.178424       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0919 16:40:14.178512       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0919 16:40:14.179563       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0919 16:40:14.179668       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0919 16:40:14.179700       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0919 16:40:14.179745       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0919 16:40:14.183427       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0919 16:40:14.183681       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0919 16:40:14.183872       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0919 16:40:14.183942       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0919 16:40:14.184012       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0919 16:40:14.184075       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0919 16:40:14.184138       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0919 16:40:14.184194       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0919 16:40:14.184251       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0919 16:40:14.184367       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0919 16:40:14.185230       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0919 16:40:14.185312       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0919 16:40:15.026512       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0919 16:40:15.072036       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0919 16:40:15.095217       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0919 16:40:15.157602       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0919 16:40:15.173235       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0919 16:40:15.879995       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-09-19 16:39:49 UTC, ends at Tue 2023-09-19 16:41:40 UTC. --
	Sep 19 16:41:18 ingress-addon-legacy-969000 kubelet[2618]: I0919 16:41:18.628105    2618 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 4cab749b813ed70d620241a97e84734de8814ef54e1ea87e2a46ac30ce16a0fc
	Sep 19 16:41:18 ingress-addon-legacy-969000 kubelet[2618]: E0919 16:41:18.628520    2618 pod_workers.go:191] Error syncing pod b758cf4f-5c65-4191-b0f8-4f74c45193e6 ("hello-world-app-5f5d8b66bb-xsj8c_default(b758cf4f-5c65-4191-b0f8-4f74c45193e6)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-xsj8c_default(b758cf4f-5c65-4191-b0f8-4f74c45193e6)"
	Sep 19 16:41:27 ingress-addon-legacy-969000 kubelet[2618]: I0919 16:41:27.032000    2618 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 669f07ee7a6916ef1544249c1b6bb8823ce9e13160bc2feb606ea76f4958ac89
	Sep 19 16:41:27 ingress-addon-legacy-969000 kubelet[2618]: E0919 16:41:27.034888    2618 pod_workers.go:191] Error syncing pod 9b795b36-fb0f-47d1-94fc-14f0b701dd18 ("kube-ingress-dns-minikube_kube-system(9b795b36-fb0f-47d1-94fc-14f0b701dd18)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with CrashLoopBackOff: "back-off 20s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(9b795b36-fb0f-47d1-94fc-14f0b701dd18)"
	Sep 19 16:41:29 ingress-addon-legacy-969000 kubelet[2618]: I0919 16:41:29.243081    2618 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-c8tcd" (UniqueName: "kubernetes.io/secret/9b795b36-fb0f-47d1-94fc-14f0b701dd18-minikube-ingress-dns-token-c8tcd") pod "9b795b36-fb0f-47d1-94fc-14f0b701dd18" (UID: "9b795b36-fb0f-47d1-94fc-14f0b701dd18")
	Sep 19 16:41:29 ingress-addon-legacy-969000 kubelet[2618]: I0919 16:41:29.245658    2618 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b795b36-fb0f-47d1-94fc-14f0b701dd18-minikube-ingress-dns-token-c8tcd" (OuterVolumeSpecName: "minikube-ingress-dns-token-c8tcd") pod "9b795b36-fb0f-47d1-94fc-14f0b701dd18" (UID: "9b795b36-fb0f-47d1-94fc-14f0b701dd18"). InnerVolumeSpecName "minikube-ingress-dns-token-c8tcd". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 19 16:41:29 ingress-addon-legacy-969000 kubelet[2618]: I0919 16:41:29.345203    2618 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-c8tcd" (UniqueName: "kubernetes.io/secret/9b795b36-fb0f-47d1-94fc-14f0b701dd18-minikube-ingress-dns-token-c8tcd") on node "ingress-addon-legacy-969000" DevicePath ""
	Sep 19 16:41:31 ingress-addon-legacy-969000 kubelet[2618]: I0919 16:41:31.827727    2618 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 669f07ee7a6916ef1544249c1b6bb8823ce9e13160bc2feb606ea76f4958ac89
	Sep 19 16:41:32 ingress-addon-legacy-969000 kubelet[2618]: I0919 16:41:32.031435    2618 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 4cab749b813ed70d620241a97e84734de8814ef54e1ea87e2a46ac30ce16a0fc
	Sep 19 16:41:32 ingress-addon-legacy-969000 kubelet[2618]: W0919 16:41:32.143377    2618 container.go:412] Failed to create summary reader for "/kubepods/besteffort/podb758cf4f-5c65-4191-b0f8-4f74c45193e6/9379358885cb043a813c8602e35f8a68f7e11fed76488fc00932f66377e14d10": none of the resources are being tracked.
	Sep 19 16:41:32 ingress-addon-legacy-969000 kubelet[2618]: W0919 16:41:32.835872    2618 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-xsj8c through plugin: invalid network status for
	Sep 19 16:41:32 ingress-addon-legacy-969000 kubelet[2618]: I0919 16:41:32.838352    2618 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 4cab749b813ed70d620241a97e84734de8814ef54e1ea87e2a46ac30ce16a0fc
	Sep 19 16:41:32 ingress-addon-legacy-969000 kubelet[2618]: I0919 16:41:32.838563    2618 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 9379358885cb043a813c8602e35f8a68f7e11fed76488fc00932f66377e14d10
	Sep 19 16:41:32 ingress-addon-legacy-969000 kubelet[2618]: E0919 16:41:32.838715    2618 pod_workers.go:191] Error syncing pod b758cf4f-5c65-4191-b0f8-4f74c45193e6 ("hello-world-app-5f5d8b66bb-xsj8c_default(b758cf4f-5c65-4191-b0f8-4f74c45193e6)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-xsj8c_default(b758cf4f-5c65-4191-b0f8-4f74c45193e6)"
	Sep 19 16:41:33 ingress-addon-legacy-969000 kubelet[2618]: E0919 16:41:33.520476    2618 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-98f5j.17865a6395a09a8d", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-98f5j", UID:"e1b0c147-03a5-49bf-9aba-a7ecbb49a8bf", APIVersion:"v1", ResourceVersion:"396", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-969000"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc13a91af5e68788d, ext:76936000490, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc13a91af5e68788d, ext:76936000490, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-98f5j.17865a6395a09a8d" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Sep 19 16:41:33 ingress-addon-legacy-969000 kubelet[2618]: E0919 16:41:33.535423    2618 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-98f5j.17865a6395a09a8d", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-98f5j", UID:"e1b0c147-03a5-49bf-9aba-a7ecbb49a8bf", APIVersion:"v1", ResourceVersion:"396", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-969000"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc13a91af5e68788d, ext:76936000490, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc13a91af5f5a2cff, ext:76951840860, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-98f5j.17865a6395a09a8d" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Sep 19 16:41:33 ingress-addon-legacy-969000 kubelet[2618]: W0919 16:41:33.850250    2618 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-xsj8c through plugin: invalid network status for
	Sep 19 16:41:35 ingress-addon-legacy-969000 kubelet[2618]: W0919 16:41:35.871585    2618 pod_container_deletor.go:77] Container "b9e70c662ee03c4058f37e3650da032f8f51dd7b3e3fcfc802af35962929c860" not found in pod's containers
	Sep 19 16:41:37 ingress-addon-legacy-969000 kubelet[2618]: I0919 16:41:37.751195    2618 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-6tn6v" (UniqueName: "kubernetes.io/secret/e1b0c147-03a5-49bf-9aba-a7ecbb49a8bf-ingress-nginx-token-6tn6v") pod "e1b0c147-03a5-49bf-9aba-a7ecbb49a8bf" (UID: "e1b0c147-03a5-49bf-9aba-a7ecbb49a8bf")
	Sep 19 16:41:37 ingress-addon-legacy-969000 kubelet[2618]: I0919 16:41:37.752277    2618 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/e1b0c147-03a5-49bf-9aba-a7ecbb49a8bf-webhook-cert") pod "e1b0c147-03a5-49bf-9aba-a7ecbb49a8bf" (UID: "e1b0c147-03a5-49bf-9aba-a7ecbb49a8bf")
	Sep 19 16:41:37 ingress-addon-legacy-969000 kubelet[2618]: I0919 16:41:37.762212    2618 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1b0c147-03a5-49bf-9aba-a7ecbb49a8bf-ingress-nginx-token-6tn6v" (OuterVolumeSpecName: "ingress-nginx-token-6tn6v") pod "e1b0c147-03a5-49bf-9aba-a7ecbb49a8bf" (UID: "e1b0c147-03a5-49bf-9aba-a7ecbb49a8bf"). InnerVolumeSpecName "ingress-nginx-token-6tn6v". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 19 16:41:37 ingress-addon-legacy-969000 kubelet[2618]: I0919 16:41:37.763008    2618 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1b0c147-03a5-49bf-9aba-a7ecbb49a8bf-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "e1b0c147-03a5-49bf-9aba-a7ecbb49a8bf" (UID: "e1b0c147-03a5-49bf-9aba-a7ecbb49a8bf"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 19 16:41:37 ingress-addon-legacy-969000 kubelet[2618]: I0919 16:41:37.852989    2618 reconciler.go:319] Volume detached for volume "ingress-nginx-token-6tn6v" (UniqueName: "kubernetes.io/secret/e1b0c147-03a5-49bf-9aba-a7ecbb49a8bf-ingress-nginx-token-6tn6v") on node "ingress-addon-legacy-969000" DevicePath ""
	Sep 19 16:41:37 ingress-addon-legacy-969000 kubelet[2618]: I0919 16:41:37.853088    2618 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/e1b0c147-03a5-49bf-9aba-a7ecbb49a8bf-webhook-cert") on node "ingress-addon-legacy-969000" DevicePath ""
	Sep 19 16:41:39 ingress-addon-legacy-969000 kubelet[2618]: W0919 16:41:39.052815    2618 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/e1b0c147-03a5-49bf-9aba-a7ecbb49a8bf/volumes" does not exist
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-969000 -n ingress-addon-legacy-969000
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-969000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (48.01s)

TestMountStart/serial/StartWithMountFirst (10.22s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-091000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-091000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.152072125s)

-- stdout --
	* [mount-start-1-091000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17240
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-091000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-091000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-091000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-091000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-091000 -n mount-start-1-091000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-091000 -n mount-start-1-091000: exit status 7 (64.998041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-091000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.22s)

TestMultiNode/serial/FreshStart2Nodes (9.88s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-120000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:85: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-120000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.808857625s)

-- stdout --
	* [multinode-120000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17240
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-120000 in cluster multinode-120000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-120000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0919 09:43:46.758567    3219 out.go:296] Setting OutFile to fd 1 ...
	I0919 09:43:46.758699    3219 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:43:46.758704    3219 out.go:309] Setting ErrFile to fd 2...
	I0919 09:43:46.758707    3219 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:43:46.758854    3219 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17240-943/.minikube/bin
	I0919 09:43:46.759857    3219 out.go:303] Setting JSON to false
	I0919 09:43:46.775253    3219 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":800,"bootTime":1695141026,"procs":402,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0919 09:43:46.775342    3219 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0919 09:43:46.779618    3219 out.go:177] * [multinode-120000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0919 09:43:46.786471    3219 out.go:177]   - MINIKUBE_LOCATION=17240
	I0919 09:43:46.786537    3219 notify.go:220] Checking for updates...
	I0919 09:43:46.790542    3219 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	I0919 09:43:46.793578    3219 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 09:43:46.796495    3219 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 09:43:46.799533    3219 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	I0919 09:43:46.802558    3219 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 09:43:46.805717    3219 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 09:43:46.809461    3219 out.go:177] * Using the qemu2 driver based on user configuration
	I0919 09:43:46.816457    3219 start.go:298] selected driver: qemu2
	I0919 09:43:46.816464    3219 start.go:902] validating driver "qemu2" against <nil>
	I0919 09:43:46.816470    3219 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 09:43:46.818456    3219 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0919 09:43:46.821544    3219 out.go:177] * Automatically selected the socket_vmnet network
	I0919 09:43:46.824603    3219 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 09:43:46.824622    3219 cni.go:84] Creating CNI manager for ""
	I0919 09:43:46.824627    3219 cni.go:136] 0 nodes found, recommending kindnet
	I0919 09:43:46.824631    3219 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0919 09:43:46.824635    3219 start_flags.go:321] config:
	{Name:multinode-120000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-120000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:doc
ker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAge
ntPID:0 AutoPauseInterval:1m0s}
	I0919 09:43:46.828961    3219 iso.go:125] acquiring lock: {Name:mka8e023e06bcb4803b8b8b48f1b2d6ef6b15681 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 09:43:46.836486    3219 out.go:177] * Starting control plane node multinode-120000 in cluster multinode-120000
	I0919 09:43:46.840580    3219 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0919 09:43:46.840598    3219 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0919 09:43:46.840606    3219 cache.go:57] Caching tarball of preloaded images
	I0919 09:43:46.840670    3219 preload.go:174] Found /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0919 09:43:46.840677    3219 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0919 09:43:46.840877    3219 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/multinode-120000/config.json ...
	I0919 09:43:46.840892    3219 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/multinode-120000/config.json: {Name:mkff0fe401770a1c62773f1f8c38b6f4b7d98eb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 09:43:46.841103    3219 start.go:365] acquiring machines lock for multinode-120000: {Name:mk731f293d5b39390bcf4e15f4078ebe7c03576e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 09:43:46.841132    3219 start.go:369] acquired machines lock for "multinode-120000" in 23.583µs
	I0919 09:43:46.841144    3219 start.go:93] Provisioning new machine with config: &{Name:multinode-120000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-120000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSiz
e:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 09:43:46.841172    3219 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 09:43:46.848562    3219 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0919 09:43:46.864695    3219 start.go:159] libmachine.API.Create for "multinode-120000" (driver="qemu2")
	I0919 09:43:46.864718    3219 client.go:168] LocalClient.Create starting
	I0919 09:43:46.864783    3219 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca.pem
	I0919 09:43:46.864813    3219 main.go:141] libmachine: Decoding PEM data...
	I0919 09:43:46.864830    3219 main.go:141] libmachine: Parsing certificate...
	I0919 09:43:46.864872    3219 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/cert.pem
	I0919 09:43:46.864893    3219 main.go:141] libmachine: Decoding PEM data...
	I0919 09:43:46.864900    3219 main.go:141] libmachine: Parsing certificate...
	I0919 09:43:46.865231    3219 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17240-943/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17240-943/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0919 09:43:46.979362    3219 main.go:141] libmachine: Creating SSH key...
	I0919 09:43:47.089758    3219 main.go:141] libmachine: Creating Disk image...
	I0919 09:43:47.089765    3219 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 09:43:47.089913    3219 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17240-943/.minikube/machines/multinode-120000/disk.qcow2.raw /Users/jenkins/minikube-integration/17240-943/.minikube/machines/multinode-120000/disk.qcow2
	I0919 09:43:47.098310    3219 main.go:141] libmachine: STDOUT: 
	I0919 09:43:47.098325    3219 main.go:141] libmachine: STDERR: 
	I0919 09:43:47.098372    3219 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/multinode-120000/disk.qcow2 +20000M
	I0919 09:43:47.105555    3219 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 09:43:47.105566    3219 main.go:141] libmachine: STDERR: 
	I0919 09:43:47.105583    3219 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17240-943/.minikube/machines/multinode-120000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17240-943/.minikube/machines/multinode-120000/disk.qcow2
	I0919 09:43:47.105591    3219 main.go:141] libmachine: Starting QEMU VM...
	I0919 09:43:47.105626    3219 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17240-943/.minikube/machines/multinode-120000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/multinode-120000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17240-943/.minikube/machines/multinode-120000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:70:3b:2b:3d:f1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/multinode-120000/disk.qcow2
	I0919 09:43:47.107161    3219 main.go:141] libmachine: STDOUT: 
	I0919 09:43:47.107175    3219 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 09:43:47.107193    3219 client.go:171] LocalClient.Create took 242.47425ms
	I0919 09:43:49.109470    3219 start.go:128] duration metric: createHost completed in 2.26827275s
	I0919 09:43:49.109562    3219 start.go:83] releasing machines lock for "multinode-120000", held for 2.268459s
	W0919 09:43:49.109621    3219 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:43:49.116908    3219 out.go:177] * Deleting "multinode-120000" in qemu2 ...
	W0919 09:43:49.136510    3219 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:43:49.136541    3219 start.go:703] Will try again in 5 seconds ...
	I0919 09:43:54.138754    3219 start.go:365] acquiring machines lock for multinode-120000: {Name:mk731f293d5b39390bcf4e15f4078ebe7c03576e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 09:43:54.139275    3219 start.go:369] acquired machines lock for "multinode-120000" in 414.083µs
	I0919 09:43:54.139411    3219 start.go:93] Provisioning new machine with config: &{Name:multinode-120000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-120000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSiz
e:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 09:43:54.139679    3219 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 09:43:54.146495    3219 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0919 09:43:54.193420    3219 start.go:159] libmachine.API.Create for "multinode-120000" (driver="qemu2")
	I0919 09:43:54.193450    3219 client.go:168] LocalClient.Create starting
	I0919 09:43:54.193551    3219 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca.pem
	I0919 09:43:54.193595    3219 main.go:141] libmachine: Decoding PEM data...
	I0919 09:43:54.193617    3219 main.go:141] libmachine: Parsing certificate...
	I0919 09:43:54.193686    3219 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/cert.pem
	I0919 09:43:54.193763    3219 main.go:141] libmachine: Decoding PEM data...
	I0919 09:43:54.193774    3219 main.go:141] libmachine: Parsing certificate...
	I0919 09:43:54.194234    3219 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17240-943/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17240-943/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0919 09:43:54.335636    3219 main.go:141] libmachine: Creating SSH key...
	I0919 09:43:54.480872    3219 main.go:141] libmachine: Creating Disk image...
	I0919 09:43:54.480878    3219 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 09:43:54.481027    3219 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17240-943/.minikube/machines/multinode-120000/disk.qcow2.raw /Users/jenkins/minikube-integration/17240-943/.minikube/machines/multinode-120000/disk.qcow2
	I0919 09:43:54.489877    3219 main.go:141] libmachine: STDOUT: 
	I0919 09:43:54.489891    3219 main.go:141] libmachine: STDERR: 
	I0919 09:43:54.489941    3219 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/multinode-120000/disk.qcow2 +20000M
	I0919 09:43:54.497193    3219 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 09:43:54.497205    3219 main.go:141] libmachine: STDERR: 
	I0919 09:43:54.497218    3219 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17240-943/.minikube/machines/multinode-120000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17240-943/.minikube/machines/multinode-120000/disk.qcow2
	I0919 09:43:54.497225    3219 main.go:141] libmachine: Starting QEMU VM...
	I0919 09:43:54.497261    3219 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17240-943/.minikube/machines/multinode-120000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/multinode-120000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17240-943/.minikube/machines/multinode-120000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:c0:2e:d9:9a:61 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/multinode-120000/disk.qcow2
	I0919 09:43:54.498830    3219 main.go:141] libmachine: STDOUT: 
	I0919 09:43:54.498842    3219 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 09:43:54.498855    3219 client.go:171] LocalClient.Create took 305.405334ms
	I0919 09:43:56.501000    3219 start.go:128] duration metric: createHost completed in 2.361327041s
	I0919 09:43:56.501060    3219 start.go:83] releasing machines lock for "multinode-120000", held for 2.361796917s
	W0919 09:43:56.501574    3219 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-120000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-120000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:43:56.513048    3219 out.go:177] 
	W0919 09:43:56.517170    3219 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 09:43:56.517194    3219 out.go:239] * 
	* 
	W0919 09:43:56.519920    3219 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 09:43:56.529081    3219 out.go:177] 

** /stderr **
multinode_test.go:87: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-120000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-120000 -n multinode-120000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-120000 -n multinode-120000: exit status 7 (63.727ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-120000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.88s)

TestMultiNode/serial/DeployApp2Nodes (109.26s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-120000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:481: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-120000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (124.264958ms)

** stderr ** 
	error: cluster "multinode-120000" does not exist

** /stderr **
multinode_test.go:483: failed to create busybox deployment to multinode cluster
multinode_test.go:486: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-120000 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-120000 -- rollout status deployment/busybox: exit status 1 (54.773125ms)

** stderr ** 
	error: no server found for cluster "multinode-120000"

** /stderr **
multinode_test.go:488: failed to deploy busybox to multinode cluster
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-120000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-120000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (54.208292ms)

** stderr ** 
	error: no server found for cluster "multinode-120000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-120000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-120000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (99.867208ms)

** stderr ** 
	error: no server found for cluster "multinode-120000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-120000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-120000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.3855ms)

** stderr ** 
	error: no server found for cluster "multinode-120000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-120000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-120000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.72125ms)

** stderr ** 
	error: no server found for cluster "multinode-120000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-120000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-120000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.481125ms)

** stderr ** 
	error: no server found for cluster "multinode-120000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-120000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-120000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.157583ms)

** stderr ** 
	error: no server found for cluster "multinode-120000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
E0919 09:44:06.334313    2051 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/functional-085000/client.crt: no such file or directory
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-120000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-120000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (99.737917ms)

** stderr ** 
	error: no server found for cluster "multinode-120000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-120000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-120000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.378417ms)

** stderr ** 
	error: no server found for cluster "multinode-120000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-120000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-120000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (99.920125ms)

** stderr ** 
	error: no server found for cluster "multinode-120000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-120000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-120000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.081958ms)

** stderr ** 
	error: no server found for cluster "multinode-120000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
E0919 09:45:28.255378    2051 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/functional-085000/client.crt: no such file or directory
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-120000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-120000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.69875ms)

** stderr ** 
	error: no server found for cluster "multinode-120000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:512: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
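Each retry above polls `kubectl get pods -o jsonpath='{.items[*].status.podIP}'`. kubectl's jsonpath printer joins the matched values with single spaces on one line, so splitting that output recovers one IP per pod. A minimal sketch of that parsing step (the IPs below are illustrative; the failing run never produced any output, since no API server was reachable):

```go
package main

import (
	"fmt"
	"strings"
)

// podIPs splits the stdout of
//
//	kubectl get pods -o jsonpath='{.items[*].status.podIP}'
//
// into one entry per pod. An empty or whitespace-only result (no pods
// scheduled, or the query never reached an API server, as in the
// retries above) yields an empty slice.
func podIPs(out string) []string {
	return strings.Fields(out)
}

func main() {
	// Illustrative IPs only, one pod per node in a two-node cluster.
	fmt.Println(podIPs("10.244.0.3 10.244.1.2"))
	fmt.Println(len(podIPs("")))
}
```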
multinode_test.go:516: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-120000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:516: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-120000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (53.704167ms)

** stderr ** 
	error: no server found for cluster "multinode-120000"

** /stderr **
multinode_test.go:518: failed get Pod names
multinode_test.go:524: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-120000 -- exec  -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-120000 -- exec  -- nslookup kubernetes.io: exit status 1 (53.053125ms)

** stderr ** 
	error: no server found for cluster "multinode-120000"

** /stderr **
multinode_test.go:526: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-120000 -- exec  -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-120000 -- exec  -- nslookup kubernetes.default: exit status 1 (52.809416ms)

** stderr ** 
	error: no server found for cluster "multinode-120000"

** /stderr **
multinode_test.go:536: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:542: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-120000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-120000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (52.846125ms)

** stderr ** 
	error: no server found for cluster "multinode-120000"

** /stderr **
multinode_test.go:544: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-120000 -n multinode-120000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-120000 -n multinode-120000: exit status 7 (27.821125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-120000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (109.26s)

TestMultiNode/serial/PingHostFrom2Pods (0.08s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-120000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-120000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (53.281083ms)

** stderr ** 
	error: no server found for cluster "multinode-120000"

** /stderr **
multinode_test.go:554: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-120000 -n multinode-120000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-120000 -n multinode-120000: exit status 7 (27.293291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-120000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.08s)

TestMultiNode/serial/AddNode (0.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-120000 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-120000 -v 3 --alsologtostderr: exit status 89 (39.132417ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-120000"

-- /stdout --
** stderr ** 
	I0919 09:45:45.978145    3317 out.go:296] Setting OutFile to fd 1 ...
	I0919 09:45:45.978337    3317 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:45:45.978340    3317 out.go:309] Setting ErrFile to fd 2...
	I0919 09:45:45.978342    3317 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:45:45.978481    3317 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17240-943/.minikube/bin
	I0919 09:45:45.978694    3317 mustload.go:65] Loading cluster: multinode-120000
	I0919 09:45:45.978882    3317 config.go:182] Loaded profile config "multinode-120000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 09:45:45.983012    3317 out.go:177] * The control plane node must be running for this command
	I0919 09:45:45.987144    3317 out.go:177]   To start a cluster, run: "minikube start -p multinode-120000"

** /stderr **
multinode_test.go:112: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-120000 -v 3 --alsologtostderr" : exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-120000 -n multinode-120000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-120000 -n multinode-120000: exit status 7 (27.419125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-120000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

TestMultiNode/serial/ProfileList (0.1s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:155: expected profile "multinode-120000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-120000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-120000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"VMDriver\":\"\",\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":0,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.2\",\"ClusterName\":\"multinode-120000\",\"Namespace\":\"default\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\",\"NodeIP\":\"\",\"NodePort\":8443,\"NodeName\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-120000 -n multinode-120000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-120000 -n multinode-120000: exit status 7 (27.730375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-120000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.10s)

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-120000 status --output json --alsologtostderr
multinode_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-120000 status --output json --alsologtostderr: exit status 7 (27.531709ms)

-- stdout --
	{"Name":"multinode-120000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0919 09:45:46.142823    3327 out.go:296] Setting OutFile to fd 1 ...
	I0919 09:45:46.142984    3327 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:45:46.142987    3327 out.go:309] Setting ErrFile to fd 2...
	I0919 09:45:46.142990    3327 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:45:46.143147    3327 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17240-943/.minikube/bin
	I0919 09:45:46.143268    3327 out.go:303] Setting JSON to true
	I0919 09:45:46.143280    3327 mustload.go:65] Loading cluster: multinode-120000
	I0919 09:45:46.143340    3327 notify.go:220] Checking for updates...
	I0919 09:45:46.143504    3327 config.go:182] Loaded profile config "multinode-120000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 09:45:46.143508    3327 status.go:255] checking status of multinode-120000 ...
	I0919 09:45:46.143718    3327 status.go:330] multinode-120000 host status = "Stopped" (err=<nil>)
	I0919 09:45:46.143721    3327 status.go:343] host is not running, skipping remaining checks
	I0919 09:45:46.143723    3327 status.go:257] multinode-120000 status: &{Name:multinode-120000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:180: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-120000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
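The unmarshal error above arises from a shape mismatch: with only one node reporting, `status --output json` printed a single JSON object, while the test decodes the output into a slice (`[]cmd.Status`). A hypothetical tolerant decoder that accepts either shape is sketched below; this is an illustration of the mismatch, not minikube's or the test suite's actual code:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// status mirrors the fields visible in the logged stdout.
type status struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

// decodeStatuses accepts either a JSON array (multi-node output) or a
// bare object (single-node output), normalizing both to a slice.
func decodeStatuses(raw []byte) ([]status, error) {
	var many []status
	if err := json.Unmarshal(raw, &many); err == nil {
		return many, nil
	}
	var one status
	if err := json.Unmarshal(raw, &one); err != nil {
		return nil, err
	}
	return []status{one}, nil
}

func main() {
	// The exact stdout from the failing run: an object, not an array.
	raw := []byte(`{"Name":"multinode-120000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
	sts, err := decodeStatuses(raw)
	fmt.Println(len(sts), sts[0].Host, err)
}
```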
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-120000 -n multinode-120000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-120000 -n multinode-120000: exit status 7 (27.665625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-120000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)

TestMultiNode/serial/StopNode (0.13s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-120000 node stop m03
multinode_test.go:210: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-120000 node stop m03: exit status 85 (44.811166ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:212: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-120000 node stop m03": exit status 85
multinode_test.go:216: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-120000 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-120000 status: exit status 7 (27.341083ms)

-- stdout --
	multinode-120000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-120000 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-120000 status --alsologtostderr: exit status 7 (26.93875ms)

-- stdout --
	multinode-120000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0919 09:45:46.270418    3335 out.go:296] Setting OutFile to fd 1 ...
	I0919 09:45:46.270584    3335 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:45:46.270587    3335 out.go:309] Setting ErrFile to fd 2...
	I0919 09:45:46.270589    3335 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:45:46.270733    3335 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17240-943/.minikube/bin
	I0919 09:45:46.270865    3335 out.go:303] Setting JSON to false
	I0919 09:45:46.270877    3335 mustload.go:65] Loading cluster: multinode-120000
	I0919 09:45:46.270925    3335 notify.go:220] Checking for updates...
	I0919 09:45:46.271085    3335 config.go:182] Loaded profile config "multinode-120000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 09:45:46.271090    3335 status.go:255] checking status of multinode-120000 ...
	I0919 09:45:46.271303    3335 status.go:330] multinode-120000 host status = "Stopped" (err=<nil>)
	I0919 09:45:46.271306    3335 status.go:343] host is not running, skipping remaining checks
	I0919 09:45:46.271308    3335 status.go:257] multinode-120000 status: &{Name:multinode-120000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:229: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-120000 status --alsologtostderr": multinode-120000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-120000 -n multinode-120000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-120000 -n multinode-120000: exit status 7 (27.1185ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-120000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.13s)

TestMultiNode/serial/StartAfterStop (0.1s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-120000 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-120000 node start m03 --alsologtostderr: exit status 85 (44.060917ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0919 09:45:46.325275    3339 out.go:296] Setting OutFile to fd 1 ...
	I0919 09:45:46.325491    3339 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:45:46.325494    3339 out.go:309] Setting ErrFile to fd 2...
	I0919 09:45:46.325497    3339 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:45:46.325640    3339 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17240-943/.minikube/bin
	I0919 09:45:46.325893    3339 mustload.go:65] Loading cluster: multinode-120000
	I0919 09:45:46.326105    3339 config.go:182] Loaded profile config "multinode-120000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 09:45:46.330849    3339 out.go:177] 
	W0919 09:45:46.333862    3339 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0919 09:45:46.333867    3339 out.go:239] * 
	* 
	W0919 09:45:46.335460    3339 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 09:45:46.338820    3339 out.go:177] 

** /stderr **
multinode_test.go:256: I0919 09:45:46.325275    3339 out.go:296] Setting OutFile to fd 1 ...
I0919 09:45:46.325491    3339 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0919 09:45:46.325494    3339 out.go:309] Setting ErrFile to fd 2...
I0919 09:45:46.325497    3339 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0919 09:45:46.325640    3339 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17240-943/.minikube/bin
I0919 09:45:46.325893    3339 mustload.go:65] Loading cluster: multinode-120000
I0919 09:45:46.326105    3339 config.go:182] Loaded profile config "multinode-120000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0919 09:45:46.330849    3339 out.go:177] 
W0919 09:45:46.333862    3339 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0919 09:45:46.333867    3339 out.go:239] * 
* 
W0919 09:45:46.335460    3339 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0919 09:45:46.338820    3339 out.go:177] 
multinode_test.go:257: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-120000 node start m03 --alsologtostderr": exit status 85
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-120000 status
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-120000 status: exit status 7 (27.385667ms)

-- stdout --
	multinode-120000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:263: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-120000 status" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-120000 -n multinode-120000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-120000 -n multinode-120000: exit status 7 (27.276792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-120000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (0.10s)

TestMultiNode/serial/RestartKeepsNodes (5.35s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-120000
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-120000
multinode_test.go:295: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-120000 --wait=true -v=8 --alsologtostderr
multinode_test.go:295: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-120000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.168501625s)

-- stdout --
	* [multinode-120000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17240
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node multinode-120000 in cluster multinode-120000
	* Restarting existing qemu2 VM for "multinode-120000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-120000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0919 09:45:46.510598    3349 out.go:296] Setting OutFile to fd 1 ...
	I0919 09:45:46.510742    3349 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:45:46.510745    3349 out.go:309] Setting ErrFile to fd 2...
	I0919 09:45:46.510747    3349 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:45:46.510894    3349 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17240-943/.minikube/bin
	I0919 09:45:46.511899    3349 out.go:303] Setting JSON to false
	I0919 09:45:46.526949    3349 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":920,"bootTime":1695141026,"procs":398,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0919 09:45:46.527011    3349 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0919 09:45:46.531857    3349 out.go:177] * [multinode-120000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0919 09:45:46.538829    3349 out.go:177]   - MINIKUBE_LOCATION=17240
	I0919 09:45:46.538857    3349 notify.go:220] Checking for updates...
	I0919 09:45:46.541847    3349 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	I0919 09:45:46.544842    3349 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 09:45:46.547866    3349 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 09:45:46.549266    3349 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	I0919 09:45:46.552808    3349 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 09:45:46.556128    3349 config.go:182] Loaded profile config "multinode-120000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 09:45:46.556183    3349 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 09:45:46.560636    3349 out.go:177] * Using the qemu2 driver based on existing profile
	I0919 09:45:46.567864    3349 start.go:298] selected driver: qemu2
	I0919 09:45:46.567872    3349 start.go:902] validating driver "qemu2" against &{Name:multinode-120000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-120000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 09:45:46.567931    3349 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 09:45:46.569882    3349 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 09:45:46.569906    3349 cni.go:84] Creating CNI manager for ""
	I0919 09:45:46.569910    3349 cni.go:136] 1 nodes found, recommending kindnet
	I0919 09:45:46.569916    3349 start_flags.go:321] config:
	{Name:multinode-120000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-120000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 09:45:46.573659    3349 iso.go:125] acquiring lock: {Name:mka8e023e06bcb4803b8b8b48f1b2d6ef6b15681 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 09:45:46.580817    3349 out.go:177] * Starting control plane node multinode-120000 in cluster multinode-120000
	I0919 09:45:46.584827    3349 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0919 09:45:46.584847    3349 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0919 09:45:46.584855    3349 cache.go:57] Caching tarball of preloaded images
	I0919 09:45:46.584914    3349 preload.go:174] Found /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0919 09:45:46.584919    3349 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0919 09:45:46.584992    3349 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/multinode-120000/config.json ...
	I0919 09:45:46.585353    3349 start.go:365] acquiring machines lock for multinode-120000: {Name:mk731f293d5b39390bcf4e15f4078ebe7c03576e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 09:45:46.585383    3349 start.go:369] acquired machines lock for "multinode-120000" in 24.542µs
	I0919 09:45:46.585395    3349 start.go:96] Skipping create...Using existing machine configuration
	I0919 09:45:46.585399    3349 fix.go:54] fixHost starting: 
	I0919 09:45:46.585514    3349 fix.go:102] recreateIfNeeded on multinode-120000: state=Stopped err=<nil>
	W0919 09:45:46.585522    3349 fix.go:128] unexpected machine state, will restart: <nil>
	I0919 09:45:46.589815    3349 out.go:177] * Restarting existing qemu2 VM for "multinode-120000" ...
	I0919 09:45:46.593894    3349 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17240-943/.minikube/machines/multinode-120000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/multinode-120000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17240-943/.minikube/machines/multinode-120000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:c0:2e:d9:9a:61 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/multinode-120000/disk.qcow2
	I0919 09:45:46.595618    3349 main.go:141] libmachine: STDOUT: 
	I0919 09:45:46.595633    3349 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 09:45:46.595657    3349 fix.go:56] fixHost completed within 10.259292ms
	I0919 09:45:46.595662    3349 start.go:83] releasing machines lock for "multinode-120000", held for 10.272875ms
	W0919 09:45:46.595674    3349 start.go:688] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 09:45:46.595715    3349 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:45:46.595719    3349 start.go:703] Will try again in 5 seconds ...
	I0919 09:45:51.597842    3349 start.go:365] acquiring machines lock for multinode-120000: {Name:mk731f293d5b39390bcf4e15f4078ebe7c03576e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 09:45:51.598167    3349 start.go:369] acquired machines lock for "multinode-120000" in 256.667µs
	I0919 09:45:51.598296    3349 start.go:96] Skipping create...Using existing machine configuration
	I0919 09:45:51.598312    3349 fix.go:54] fixHost starting: 
	I0919 09:45:51.599013    3349 fix.go:102] recreateIfNeeded on multinode-120000: state=Stopped err=<nil>
	W0919 09:45:51.599041    3349 fix.go:128] unexpected machine state, will restart: <nil>
	I0919 09:45:51.602585    3349 out.go:177] * Restarting existing qemu2 VM for "multinode-120000" ...
	I0919 09:45:51.606717    3349 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17240-943/.minikube/machines/multinode-120000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/multinode-120000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17240-943/.minikube/machines/multinode-120000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:c0:2e:d9:9a:61 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/multinode-120000/disk.qcow2
	I0919 09:45:51.615080    3349 main.go:141] libmachine: STDOUT: 
	I0919 09:45:51.615140    3349 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 09:45:51.615206    3349 fix.go:56] fixHost completed within 16.893667ms
	I0919 09:45:51.615231    3349 start.go:83] releasing machines lock for "multinode-120000", held for 17.044916ms
	W0919 09:45:51.615463    3349 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-120000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-120000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:45:51.624471    3349 out.go:177] 
	W0919 09:45:51.628504    3349 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 09:45:51.628562    3349 out.go:239] * 
	* 
	W0919 09:45:51.631320    3349 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 09:45:51.640442    3349 out.go:177] 

** /stderr **
multinode_test.go:297: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-120000" : exit status 80
multinode_test.go:300: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-120000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-120000 -n multinode-120000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-120000 -n multinode-120000: exit status 7 (31.295875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-120000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (5.35s)

TestMultiNode/serial/DeleteNode (0.09s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-120000 node delete m03
multinode_test.go:394: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-120000 node delete m03: exit status 89 (37.084959ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-120000"

-- /stdout --
multinode_test.go:396: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-120000 node delete m03": exit status 89
multinode_test.go:400: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-120000 status --alsologtostderr
multinode_test.go:400: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-120000 status --alsologtostderr: exit status 7 (27.488542ms)

-- stdout --
	multinode-120000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0919 09:45:51.814788    3363 out.go:296] Setting OutFile to fd 1 ...
	I0919 09:45:51.814957    3363 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:45:51.814960    3363 out.go:309] Setting ErrFile to fd 2...
	I0919 09:45:51.814963    3363 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:45:51.815082    3363 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17240-943/.minikube/bin
	I0919 09:45:51.815211    3363 out.go:303] Setting JSON to false
	I0919 09:45:51.815222    3363 mustload.go:65] Loading cluster: multinode-120000
	I0919 09:45:51.815294    3363 notify.go:220] Checking for updates...
	I0919 09:45:51.815417    3363 config.go:182] Loaded profile config "multinode-120000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 09:45:51.815422    3363 status.go:255] checking status of multinode-120000 ...
	I0919 09:45:51.815641    3363 status.go:330] multinode-120000 host status = "Stopped" (err=<nil>)
	I0919 09:45:51.815644    3363 status.go:343] host is not running, skipping remaining checks
	I0919 09:45:51.815646    3363 status.go:257] multinode-120000 status: &{Name:multinode-120000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:402: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-120000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-120000 -n multinode-120000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-120000 -n multinode-120000: exit status 7 (27.424375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-120000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.09s)

TestMultiNode/serial/StopMultiNode (0.14s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-120000 stop
multinode_test.go:320: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-120000 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-120000 status: exit status 7 (28.435708ms)

-- stdout --
	multinode-120000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-120000 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-120000 status --alsologtostderr: exit status 7 (27.014875ms)

-- stdout --
	multinode-120000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0919 09:45:51.954865    3371 out.go:296] Setting OutFile to fd 1 ...
	I0919 09:45:51.955035    3371 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:45:51.955038    3371 out.go:309] Setting ErrFile to fd 2...
	I0919 09:45:51.955040    3371 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:45:51.955159    3371 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17240-943/.minikube/bin
	I0919 09:45:51.955290    3371 out.go:303] Setting JSON to false
	I0919 09:45:51.955302    3371 mustload.go:65] Loading cluster: multinode-120000
	I0919 09:45:51.955359    3371 notify.go:220] Checking for updates...
	I0919 09:45:51.955503    3371 config.go:182] Loaded profile config "multinode-120000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 09:45:51.955507    3371 status.go:255] checking status of multinode-120000 ...
	I0919 09:45:51.955744    3371 status.go:330] multinode-120000 host status = "Stopped" (err=<nil>)
	I0919 09:45:51.955748    3371 status.go:343] host is not running, skipping remaining checks
	I0919 09:45:51.955750    3371 status.go:257] multinode-120000 status: &{Name:multinode-120000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:333: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-120000 status --alsologtostderr": multinode-120000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:337: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-120000 status --alsologtostderr": multinode-120000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-120000 -n multinode-120000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-120000 -n multinode-120000: exit status 7 (27.222959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-120000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (0.14s)

TestMultiNode/serial/RestartMultiNode (5.25s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-120000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
E0919 09:45:52.768791    2051 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/ingress-addon-legacy-969000/client.crt: no such file or directory
E0919 09:45:52.775217    2051 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/ingress-addon-legacy-969000/client.crt: no such file or directory
E0919 09:45:52.787308    2051 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/ingress-addon-legacy-969000/client.crt: no such file or directory
E0919 09:45:52.809410    2051 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/ingress-addon-legacy-969000/client.crt: no such file or directory
E0919 09:45:52.851708    2051 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/ingress-addon-legacy-969000/client.crt: no such file or directory
E0919 09:45:52.933939    2051 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/ingress-addon-legacy-969000/client.crt: no such file or directory
E0919 09:45:53.096268    2051 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/ingress-addon-legacy-969000/client.crt: no such file or directory
E0919 09:45:53.417169    2051 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/ingress-addon-legacy-969000/client.crt: no such file or directory
E0919 09:45:54.059659    2051 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/ingress-addon-legacy-969000/client.crt: no such file or directory
E0919 09:45:55.342143    2051 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/ingress-addon-legacy-969000/client.crt: no such file or directory
multinode_test.go:354: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-120000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.180154291s)

-- stdout --
	* [multinode-120000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17240
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node multinode-120000 in cluster multinode-120000
	* Restarting existing qemu2 VM for "multinode-120000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-120000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0919 09:45:52.008592    3375 out.go:296] Setting OutFile to fd 1 ...
	I0919 09:45:52.008726    3375 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:45:52.008728    3375 out.go:309] Setting ErrFile to fd 2...
	I0919 09:45:52.008731    3375 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:45:52.008866    3375 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17240-943/.minikube/bin
	I0919 09:45:52.009809    3375 out.go:303] Setting JSON to false
	I0919 09:45:52.024882    3375 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":926,"bootTime":1695141026,"procs":399,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0919 09:45:52.024974    3375 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0919 09:45:52.028001    3375 out.go:177] * [multinode-120000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0919 09:45:52.030983    3375 out.go:177]   - MINIKUBE_LOCATION=17240
	I0919 09:45:52.034964    3375 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	I0919 09:45:52.031046    3375 notify.go:220] Checking for updates...
	I0919 09:45:52.042961    3375 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 09:45:52.050932    3375 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 09:45:52.058989    3375 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	I0919 09:45:52.066955    3375 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 09:45:52.070324    3375 config.go:182] Loaded profile config "multinode-120000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 09:45:52.070579    3375 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 09:45:52.073966    3375 out.go:177] * Using the qemu2 driver based on existing profile
	I0919 09:45:52.080979    3375 start.go:298] selected driver: qemu2
	I0919 09:45:52.080984    3375 start.go:902] validating driver "qemu2" against &{Name:multinode-120000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.2 ClusterName:multinode-120000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 09:45:52.081028    3375 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 09:45:52.083044    3375 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 09:45:52.083072    3375 cni.go:84] Creating CNI manager for ""
	I0919 09:45:52.083077    3375 cni.go:136] 1 nodes found, recommending kindnet
	I0919 09:45:52.083083    3375 start_flags.go:321] config:
	{Name:multinode-120000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-120000 Namespace:default API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 09:45:52.087252    3375 iso.go:125] acquiring lock: {Name:mka8e023e06bcb4803b8b8b48f1b2d6ef6b15681 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 09:45:52.093958    3375 out.go:177] * Starting control plane node multinode-120000 in cluster multinode-120000
	I0919 09:45:52.096830    3375 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0919 09:45:52.096846    3375 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0919 09:45:52.096856    3375 cache.go:57] Caching tarball of preloaded images
	I0919 09:45:52.096903    3375 preload.go:174] Found /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0919 09:45:52.096908    3375 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0919 09:45:52.096971    3375 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/multinode-120000/config.json ...
	I0919 09:45:52.097285    3375 start.go:365] acquiring machines lock for multinode-120000: {Name:mk731f293d5b39390bcf4e15f4078ebe7c03576e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 09:45:52.097312    3375 start.go:369] acquired machines lock for "multinode-120000" in 20.75µs
	I0919 09:45:52.097321    3375 start.go:96] Skipping create...Using existing machine configuration
	I0919 09:45:52.097325    3375 fix.go:54] fixHost starting: 
	I0919 09:45:52.097450    3375 fix.go:102] recreateIfNeeded on multinode-120000: state=Stopped err=<nil>
	W0919 09:45:52.097458    3375 fix.go:128] unexpected machine state, will restart: <nil>
	I0919 09:45:52.101976    3375 out.go:177] * Restarting existing qemu2 VM for "multinode-120000" ...
	I0919 09:45:52.108963    3375 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17240-943/.minikube/machines/multinode-120000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/multinode-120000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17240-943/.minikube/machines/multinode-120000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:c0:2e:d9:9a:61 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/multinode-120000/disk.qcow2
	I0919 09:45:52.110836    3375 main.go:141] libmachine: STDOUT: 
	I0919 09:45:52.110856    3375 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 09:45:52.110881    3375 fix.go:56] fixHost completed within 13.554959ms
	I0919 09:45:52.110886    3375 start.go:83] releasing machines lock for "multinode-120000", held for 13.570833ms
	W0919 09:45:52.110893    3375 start.go:688] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 09:45:52.110935    3375 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:45:52.110939    3375 start.go:703] Will try again in 5 seconds ...
	I0919 09:45:57.113057    3375 start.go:365] acquiring machines lock for multinode-120000: {Name:mk731f293d5b39390bcf4e15f4078ebe7c03576e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 09:45:57.113447    3375 start.go:369] acquired machines lock for "multinode-120000" in 295.583µs
	I0919 09:45:57.113596    3375 start.go:96] Skipping create...Using existing machine configuration
	I0919 09:45:57.113616    3375 fix.go:54] fixHost starting: 
	I0919 09:45:57.114383    3375 fix.go:102] recreateIfNeeded on multinode-120000: state=Stopped err=<nil>
	W0919 09:45:57.114412    3375 fix.go:128] unexpected machine state, will restart: <nil>
	I0919 09:45:57.117928    3375 out.go:177] * Restarting existing qemu2 VM for "multinode-120000" ...
	I0919 09:45:57.122024    3375 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17240-943/.minikube/machines/multinode-120000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/multinode-120000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17240-943/.minikube/machines/multinode-120000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:c0:2e:d9:9a:61 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/multinode-120000/disk.qcow2
	I0919 09:45:57.130841    3375 main.go:141] libmachine: STDOUT: 
	I0919 09:45:57.130890    3375 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 09:45:57.131025    3375 fix.go:56] fixHost completed within 17.357416ms
	I0919 09:45:57.131046    3375 start.go:83] releasing machines lock for "multinode-120000", held for 17.57825ms
	W0919 09:45:57.131204    3375 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-120000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-120000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:45:57.137856    3375 out.go:177] 
	W0919 09:45:57.140942    3375 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 09:45:57.140985    3375 out.go:239] * 
	* 
	W0919 09:45:57.143301    3375 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 09:45:57.151658    3375 out.go:177] 

** /stderr **
multinode_test.go:356: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-120000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-120000 -n multinode-120000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-120000 -n multinode-120000: exit status 7 (65.901417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-120000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)

TestMultiNode/serial/ValidateNameConflict (20.23s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-120000
multinode_test.go:452: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-120000-m01 --driver=qemu2 
E0919 09:45:57.904460    2051 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/ingress-addon-legacy-969000/client.crt: no such file or directory
E0919 09:46:03.026928    2051 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/ingress-addon-legacy-969000/client.crt: no such file or directory
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-120000-m01 --driver=qemu2 : exit status 80 (9.960095917s)

-- stdout --
	* [multinode-120000-m01] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17240
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-120000-m01 in cluster multinode-120000-m01
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-120000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-120000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-120000-m02 --driver=qemu2 
E0919 09:46:13.269373    2051 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/ingress-addon-legacy-969000/client.crt: no such file or directory
multinode_test.go:460: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-120000-m02 --driver=qemu2 : exit status 80 (10.026969041s)

-- stdout --
	* [multinode-120000-m02] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17240
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-120000-m02 in cluster multinode-120000-m02
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-120000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-120000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:462: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-120000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-120000
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-120000: exit status 89 (75.48325ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-120000"

-- /stdout --
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-120000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-120000 -n multinode-120000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-120000 -n multinode-120000: exit status 7 (28.033959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-120000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.23s)

TestPreload (9.95s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-896000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-896000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.777398333s)

-- stdout --
	* [test-preload-896000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17240
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node test-preload-896000 in cluster test-preload-896000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-896000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0919 09:46:17.609290    3436 out.go:296] Setting OutFile to fd 1 ...
	I0919 09:46:17.609423    3436 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:46:17.609426    3436 out.go:309] Setting ErrFile to fd 2...
	I0919 09:46:17.609430    3436 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:46:17.609594    3436 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17240-943/.minikube/bin
	I0919 09:46:17.610699    3436 out.go:303] Setting JSON to false
	I0919 09:46:17.627068    3436 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":951,"bootTime":1695141026,"procs":400,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0919 09:46:17.627160    3436 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0919 09:46:17.632966    3436 out.go:177] * [test-preload-896000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0919 09:46:17.640044    3436 out.go:177]   - MINIKUBE_LOCATION=17240
	I0919 09:46:17.640110    3436 notify.go:220] Checking for updates...
	I0919 09:46:17.643992    3436 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	I0919 09:46:17.646973    3436 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 09:46:17.649943    3436 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 09:46:17.652896    3436 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	I0919 09:46:17.659765    3436 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 09:46:17.664224    3436 config.go:182] Loaded profile config "multinode-120000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 09:46:17.664270    3436 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 09:46:17.668930    3436 out.go:177] * Using the qemu2 driver based on user configuration
	I0919 09:46:17.675930    3436 start.go:298] selected driver: qemu2
	I0919 09:46:17.675938    3436 start.go:902] validating driver "qemu2" against <nil>
	I0919 09:46:17.675945    3436 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 09:46:17.678063    3436 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0919 09:46:17.680960    3436 out.go:177] * Automatically selected the socket_vmnet network
	I0919 09:46:17.683982    3436 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 09:46:17.684004    3436 cni.go:84] Creating CNI manager for ""
	I0919 09:46:17.684017    3436 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 09:46:17.684026    3436 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0919 09:46:17.684032    3436 start_flags.go:321] config:
	{Name:test-preload-896000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-896000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunti
me:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock:
SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 09:46:17.688382    3436 iso.go:125] acquiring lock: {Name:mka8e023e06bcb4803b8b8b48f1b2d6ef6b15681 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 09:46:17.695989    3436 out.go:177] * Starting control plane node test-preload-896000 in cluster test-preload-896000
	I0919 09:46:17.699941    3436 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0919 09:46:17.700028    3436 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/test-preload-896000/config.json ...
	I0919 09:46:17.700033    3436 cache.go:107] acquiring lock: {Name:mkfaaef1ae9fdfa01368adf24b2ff1c2b3834997 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 09:46:17.700043    3436 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/test-preload-896000/config.json: {Name:mk413f6b8de46445a7c335a187b403db421aa812 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 09:46:17.700042    3436 cache.go:107] acquiring lock: {Name:mk88feb4a54e132266e35f086178132a8a19f83d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 09:46:17.700048    3436 cache.go:107] acquiring lock: {Name:mkc6f76fc4f80efc88ab4fef069872275559a306 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 09:46:17.700216    3436 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 09:46:17.700296    3436 cache.go:107] acquiring lock: {Name:mka9e314508740d6c268715aec46221ecea3b2b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 09:46:17.700332    3436 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0919 09:46:17.700362    3436 start.go:365] acquiring machines lock for test-preload-896000: {Name:mk731f293d5b39390bcf4e15f4078ebe7c03576e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 09:46:17.700332    3436 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0919 09:46:17.700375    3436 cache.go:107] acquiring lock: {Name:mkd115e7a25df2681fa96ead3e3efd7e48bf2df9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 09:46:17.700395    3436 start.go:369] acquired machines lock for "test-preload-896000" in 27.083µs
	I0919 09:46:17.700360    3436 cache.go:107] acquiring lock: {Name:mka41d35a032e96fedc606651a1c49e3e0aece64 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 09:46:17.700409    3436 start.go:93] Provisioning new machine with config: &{Name:test-preload-896000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernete
sConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-896000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mo
untMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 09:46:17.700453    3436 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 09:46:17.700463    3436 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0919 09:46:17.700463    3436 cache.go:107] acquiring lock: {Name:mk8d4e75ff9f69f7d75e2dccc883ed618797c902 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 09:46:17.700469    3436 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0919 09:46:17.703927    3436 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0919 09:46:17.700398    3436 cache.go:107] acquiring lock: {Name:mke1ac82d9a4a986033d938b1d8a1d14f87de6cb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 09:46:17.700888    3436 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0919 09:46:17.700920    3436 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0919 09:46:17.704567    3436 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0919 09:46:17.710950    3436 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0919 09:46:17.711053    3436 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0919 09:46:17.711616    3436 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0919 09:46:17.711725    3436 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0919 09:46:17.711725    3436 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 09:46:17.715473    3436 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0919 09:46:17.715530    3436 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0919 09:46:17.715590    3436 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0919 09:46:17.720784    3436 start.go:159] libmachine.API.Create for "test-preload-896000" (driver="qemu2")
	I0919 09:46:17.720800    3436 client.go:168] LocalClient.Create starting
	I0919 09:46:17.720866    3436 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca.pem
	I0919 09:46:17.720911    3436 main.go:141] libmachine: Decoding PEM data...
	I0919 09:46:17.720926    3436 main.go:141] libmachine: Parsing certificate...
	I0919 09:46:17.720967    3436 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/cert.pem
	I0919 09:46:17.721002    3436 main.go:141] libmachine: Decoding PEM data...
	I0919 09:46:17.721010    3436 main.go:141] libmachine: Parsing certificate...
	I0919 09:46:17.721336    3436 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17240-943/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17240-943/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0919 09:46:17.842791    3436 main.go:141] libmachine: Creating SSH key...
	I0919 09:46:17.878594    3436 main.go:141] libmachine: Creating Disk image...
	I0919 09:46:17.878604    3436 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 09:46:17.878725    3436 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17240-943/.minikube/machines/test-preload-896000/disk.qcow2.raw /Users/jenkins/minikube-integration/17240-943/.minikube/machines/test-preload-896000/disk.qcow2
	I0919 09:46:17.887269    3436 main.go:141] libmachine: STDOUT: 
	I0919 09:46:17.887329    3436 main.go:141] libmachine: STDERR: 
	I0919 09:46:17.887387    3436 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/test-preload-896000/disk.qcow2 +20000M
	I0919 09:46:17.895062    3436 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 09:46:17.895077    3436 main.go:141] libmachine: STDERR: 
	I0919 09:46:17.895099    3436 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17240-943/.minikube/machines/test-preload-896000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17240-943/.minikube/machines/test-preload-896000/disk.qcow2
	I0919 09:46:17.895112    3436 main.go:141] libmachine: Starting QEMU VM...
	I0919 09:46:17.895154    3436 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17240-943/.minikube/machines/test-preload-896000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/test-preload-896000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17240-943/.minikube/machines/test-preload-896000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:fa:84:b0:3f:da -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/test-preload-896000/disk.qcow2
	I0919 09:46:17.896763    3436 main.go:141] libmachine: STDOUT: 
	I0919 09:46:17.896779    3436 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 09:46:17.896798    3436 client.go:171] LocalClient.Create took 175.995375ms
	I0919 09:46:18.396579    3436 cache.go:162] opening:  /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0919 09:46:18.477929    3436 cache.go:162] opening:  /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0919 09:46:18.682791    3436 cache.go:162] opening:  /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0919 09:46:18.823896    3436 cache.go:162] opening:  /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0919 09:46:18.996061    3436 cache.go:157] /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0919 09:46:18.996075    3436 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 1.295723458s
	I0919 09:46:18.996084    3436 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	I0919 09:46:19.288811    3436 cache.go:162] opening:  /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	W0919 09:46:19.481704    3436 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0919 09:46:19.481763    3436 cache.go:162] opening:  /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	W0919 09:46:19.540389    3436 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0919 09:46:19.540408    3436 cache.go:162] opening:  /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0919 09:46:19.646150    3436 cache.go:162] opening:  /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0919 09:46:19.897094    3436 start.go:128] duration metric: createHost completed in 2.19665625s
	I0919 09:46:19.897154    3436 start.go:83] releasing machines lock for "test-preload-896000", held for 2.196787417s
	W0919 09:46:19.897209    3436 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:46:19.905727    3436 out.go:177] * Deleting "test-preload-896000" in qemu2 ...
	W0919 09:46:19.924849    3436 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:46:19.924887    3436 start.go:703] Will try again in 5 seconds ...
	I0919 09:46:19.978700    3436 cache.go:157] /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0919 09:46:19.978742    3436 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 2.278749584s
	I0919 09:46:19.978769    3436 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0919 09:46:22.085226    3436 cache.go:157] /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0919 09:46:22.085270    3436 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 4.385019167s
	I0919 09:46:22.085305    3436 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0919 09:46:22.166104    3436 cache.go:157] /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0919 09:46:22.166131    3436 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 4.465917167s
	I0919 09:46:22.166157    3436 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0919 09:46:22.206173    3436 cache.go:157] /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0919 09:46:22.206226    3436 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 4.506265292s
	I0919 09:46:22.206254    3436 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0919 09:46:24.052953    3436 cache.go:157] /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0919 09:46:24.053002    3436 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 6.352876334s
	I0919 09:46:24.053030    3436 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0919 09:46:24.424384    3436 cache.go:157] /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0919 09:46:24.424430    3436 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.724509291s
	I0919 09:46:24.424461    3436 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0919 09:46:24.925128    3436 start.go:365] acquiring machines lock for test-preload-896000: {Name:mk731f293d5b39390bcf4e15f4078ebe7c03576e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 09:46:24.925911    3436 start.go:369] acquired machines lock for "test-preload-896000" in 712.75µs
	I0919 09:46:24.926159    3436 start.go:93] Provisioning new machine with config: &{Name:test-preload-896000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernete
sConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-896000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mo
untMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 09:46:24.926440    3436 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 09:46:24.932070    3436 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0919 09:46:24.977978    3436 start.go:159] libmachine.API.Create for "test-preload-896000" (driver="qemu2")
	I0919 09:46:24.978021    3436 client.go:168] LocalClient.Create starting
	I0919 09:46:24.978135    3436 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca.pem
	I0919 09:46:24.978190    3436 main.go:141] libmachine: Decoding PEM data...
	I0919 09:46:24.978213    3436 main.go:141] libmachine: Parsing certificate...
	I0919 09:46:24.978294    3436 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/cert.pem
	I0919 09:46:24.978337    3436 main.go:141] libmachine: Decoding PEM data...
	I0919 09:46:24.978355    3436 main.go:141] libmachine: Parsing certificate...
	I0919 09:46:24.978811    3436 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17240-943/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17240-943/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0919 09:46:25.109551    3436 main.go:141] libmachine: Creating SSH key...
	I0919 09:46:25.299656    3436 main.go:141] libmachine: Creating Disk image...
	I0919 09:46:25.299667    3436 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 09:46:25.299848    3436 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17240-943/.minikube/machines/test-preload-896000/disk.qcow2.raw /Users/jenkins/minikube-integration/17240-943/.minikube/machines/test-preload-896000/disk.qcow2
	I0919 09:46:25.308966    3436 main.go:141] libmachine: STDOUT: 
	I0919 09:46:25.309008    3436 main.go:141] libmachine: STDERR: 
	I0919 09:46:25.309064    3436 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/test-preload-896000/disk.qcow2 +20000M
	I0919 09:46:25.316392    3436 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 09:46:25.316407    3436 main.go:141] libmachine: STDERR: 
	I0919 09:46:25.316418    3436 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17240-943/.minikube/machines/test-preload-896000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17240-943/.minikube/machines/test-preload-896000/disk.qcow2
	I0919 09:46:25.316426    3436 main.go:141] libmachine: Starting QEMU VM...
	I0919 09:46:25.316468    3436 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17240-943/.minikube/machines/test-preload-896000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/test-preload-896000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17240-943/.minikube/machines/test-preload-896000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:94:89:4d:b4:5b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/test-preload-896000/disk.qcow2
	I0919 09:46:25.318104    3436 main.go:141] libmachine: STDOUT: 
	I0919 09:46:25.318125    3436 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 09:46:25.318144    3436 client.go:171] LocalClient.Create took 340.12225ms
	I0919 09:46:27.319897    3436 start.go:128] duration metric: createHost completed in 2.3934705s
	I0919 09:46:27.319975    3436 start.go:83] releasing machines lock for "test-preload-896000", held for 2.394031583s
	W0919 09:46:27.320193    3436 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-896000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-896000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:46:27.329772    3436 out.go:177] 
	W0919 09:46:27.333806    3436 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 09:46:27.333885    3436 out.go:239] * 
	* 
	W0919 09:46:27.336962    3436 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 09:46:27.345728    3436 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-896000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:523: *** TestPreload FAILED at 2023-09-19 09:46:27.362999 -0700 PDT m=+769.818071376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-896000 -n test-preload-896000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-896000 -n test-preload-896000: exit status 7 (62.731375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-896000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-896000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-896000
--- FAIL: TestPreload (9.95s)
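Every attempt in this failure reduces to the same host-side error, `Failed to connect to "/var/run/socket_vmnet": Connection refused`: the socket_vmnet daemon was not listening when `socket_vmnet_client` tried to hand QEMU a network file descriptor. A minimal Go sketch of that connectivity probe (a hypothetical diagnostic, not part of the test harness; the socket path is the `SocketVMnetPath` value from the config dump above):

```go
package main

import (
	"fmt"
	"net"
)

// probeSocket mimics the first thing socket_vmnet_client must do:
// dial the daemon's unix socket. A "connection refused" or
// "no such file or directory" error here reproduces the failure
// mode seen throughout this run.
func probeSocket(path string) error {
	conn, err := net.Dial("unix", path)
	if err != nil {
		return err
	}
	conn.Close()
	return nil
}

func main() {
	if err := probeSocket("/var/run/socket_vmnet"); err != nil {
		fmt.Println("socket_vmnet not reachable:", err)
	} else {
		fmt.Println("socket_vmnet reachable")
	}
}
```

In a healthy run the dial succeeds immediately; here both create attempts fail within milliseconds of starting QEMU, which is why the whole test exits after only 9.95s despite the image cache downloads completing in the background.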

TestScheduledStopUnix (10.04s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-690000 --memory=2048 --driver=qemu2 
E0919 09:46:33.751883    2051 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/ingress-addon-legacy-969000/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-690000 --memory=2048 --driver=qemu2 : exit status 80 (9.870936042s)

-- stdout --
	* [scheduled-stop-690000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17240
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node scheduled-stop-690000 in cluster scheduled-stop-690000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-690000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-690000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-690000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17240
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node scheduled-stop-690000 in cluster scheduled-stop-690000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-690000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-690000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:523: *** TestScheduledStopUnix FAILED at 2023-09-19 09:46:37.399255 -0700 PDT m=+779.854502293
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-690000 -n scheduled-stop-690000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-690000 -n scheduled-stop-690000: exit status 7 (66.259417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-690000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-690000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-690000
--- FAIL: TestScheduledStopUnix (10.04s)
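Every qemu2 start in this run fails the same way: `Failed to connect to "/var/run/socket_vmnet": Connection refused`, meaning the socket_vmnet daemon is not listening on the agent. A minimal standalone probe sketch (not part of the test suite; the socket path is taken from the log above, adjust for other installs) that reproduces the connectivity check:

```go
package main

import (
	"fmt"
	"net"
)

// probeSocket reports whether a unix-domain socket at path accepts
// connections. "connection refused" or "no such file or directory" here
// matches the repeated failure in this report: the socket_vmnet daemon
// is not listening at the expected path.
func probeSocket(path string) error {
	conn, err := net.Dial("unix", path)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	// Path taken from the log output; hypothetical default elsewhere.
	if err := probeSocket("/var/run/socket_vmnet"); err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
	} else {
		fmt.Println("socket_vmnet reachable")
	}
}
```

Running this on the build agent before the suite would distinguish a dead daemon from a test-specific failure.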

TestSkaffold (11.92s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe22859665 version
skaffold_test.go:63: skaffold version: v2.7.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-769000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-769000 --memory=2600 --driver=qemu2 : exit status 80 (9.76974975s)

-- stdout --
	* [skaffold-769000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17240
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node skaffold-769000 in cluster skaffold-769000
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-769000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-769000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-769000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17240
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node skaffold-769000 in cluster skaffold-769000
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-769000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-769000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:523: *** TestSkaffold FAILED at 2023-09-19 09:46:49.32664 -0700 PDT m=+791.782095126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-769000 -n skaffold-769000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-769000 -n skaffold-769000: exit status 7 (60.699167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-769000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-769000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-769000
--- FAIL: TestSkaffold (11.92s)

TestRunningBinaryUpgrade (127s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:107: v1.6.2 release installation failed: bad response code: 404
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-09-19 09:49:36.266995 -0700 PDT m=+958.725357085
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-306000 -n running-upgrade-306000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-306000 -n running-upgrade-306000: exit status 85 (77.536834ms)

-- stdout --
	* Profile "running-upgrade-306000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p running-upgrade-306000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "running-upgrade-306000" host is not running, skipping log retrieval (state="* Profile \"running-upgrade-306000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p running-upgrade-306000\"")
helpers_test.go:175: Cleaning up "running-upgrade-306000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-306000
--- FAIL: TestRunningBinaryUpgrade (127.00s)
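Unlike the socket_vmnet failures, this test dies earlier, downloading the old binary: `v1.6.2 release installation failed: bad response code: 404`. A sketch of the URL the test plausibly requests, assuming the layout of minikube's public release bucket (the `releaseURL` helper and the bucket pattern are this report's assumptions, not the test's actual code); v1.6.2 predates darwin/arm64 builds, which would explain the 404:

```go
package main

import "fmt"

// releaseURL reconstructs a download URL for a historical minikube binary.
// Assumption: the public release bucket lays binaries out as
// releases/<version>/minikube-<os>-<arch>. No darwin/arm64 binary was
// published for v1.6.2, so that combination would 404.
func releaseURL(version, osName, arch string) string {
	return fmt.Sprintf("https://storage.googleapis.com/minikube/releases/%s/minikube-%s-%s",
		version, osName, arch)
}

func main() {
	fmt.Println(releaseURL("v1.6.2", "darwin", "arm64"))
	// → https://storage.googleapis.com/minikube/releases/v1.6.2/minikube-darwin-arm64
}
```

If this reading is right, the fix is in test setup (pin an older version that shipped an arm64 darwin binary), not on the agent.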

TestKubernetesUpgrade (15.31s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-917000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:235: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-917000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.797031667s)

-- stdout --
	* [kubernetes-upgrade-917000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17240
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kubernetes-upgrade-917000 in cluster kubernetes-upgrade-917000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-917000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0919 09:49:36.611643    3944 out.go:296] Setting OutFile to fd 1 ...
	I0919 09:49:36.611779    3944 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:49:36.611782    3944 out.go:309] Setting ErrFile to fd 2...
	I0919 09:49:36.611784    3944 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:49:36.611905    3944 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17240-943/.minikube/bin
	I0919 09:49:36.612923    3944 out.go:303] Setting JSON to false
	I0919 09:49:36.628026    3944 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1150,"bootTime":1695141026,"procs":402,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0919 09:49:36.628123    3944 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0919 09:49:36.633239    3944 out.go:177] * [kubernetes-upgrade-917000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0919 09:49:36.640245    3944 out.go:177]   - MINIKUBE_LOCATION=17240
	I0919 09:49:36.644038    3944 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	I0919 09:49:36.640306    3944 notify.go:220] Checking for updates...
	I0919 09:49:36.651174    3944 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 09:49:36.652640    3944 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 09:49:36.656190    3944 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	I0919 09:49:36.659174    3944 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 09:49:36.662636    3944 config.go:182] Loaded profile config "cert-expiration-744000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 09:49:36.662701    3944 config.go:182] Loaded profile config "multinode-120000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 09:49:36.662752    3944 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 09:49:36.667159    3944 out.go:177] * Using the qemu2 driver based on user configuration
	I0919 09:49:36.674168    3944 start.go:298] selected driver: qemu2
	I0919 09:49:36.674175    3944 start.go:902] validating driver "qemu2" against <nil>
	I0919 09:49:36.674181    3944 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 09:49:36.676251    3944 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0919 09:49:36.679247    3944 out.go:177] * Automatically selected the socket_vmnet network
	I0919 09:49:36.682264    3944 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0919 09:49:36.682289    3944 cni.go:84] Creating CNI manager for ""
	I0919 09:49:36.682298    3944 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0919 09:49:36.682308    3944 start_flags.go:321] config:
	{Name:kubernetes-upgrade-917000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-917000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 09:49:36.686667    3944 iso.go:125] acquiring lock: {Name:mka8e023e06bcb4803b8b8b48f1b2d6ef6b15681 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 09:49:36.694145    3944 out.go:177] * Starting control plane node kubernetes-upgrade-917000 in cluster kubernetes-upgrade-917000
	I0919 09:49:36.698156    3944 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0919 09:49:36.698174    3944 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0919 09:49:36.698182    3944 cache.go:57] Caching tarball of preloaded images
	I0919 09:49:36.698236    3944 preload.go:174] Found /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0919 09:49:36.698241    3944 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0919 09:49:36.698304    3944 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/kubernetes-upgrade-917000/config.json ...
	I0919 09:49:36.698321    3944 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/kubernetes-upgrade-917000/config.json: {Name:mkfd991f9b5d70cbe3dccbfbb777d81be6342853 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 09:49:36.698546    3944 start.go:365] acquiring machines lock for kubernetes-upgrade-917000: {Name:mk731f293d5b39390bcf4e15f4078ebe7c03576e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 09:49:36.698584    3944 start.go:369] acquired machines lock for "kubernetes-upgrade-917000" in 27.458µs
	I0919 09:49:36.698597    3944 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-917000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-917000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID
:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 09:49:36.698632    3944 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 09:49:36.707195    3944 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0919 09:49:36.723433    3944 start.go:159] libmachine.API.Create for "kubernetes-upgrade-917000" (driver="qemu2")
	I0919 09:49:36.723459    3944 client.go:168] LocalClient.Create starting
	I0919 09:49:36.723534    3944 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca.pem
	I0919 09:49:36.723563    3944 main.go:141] libmachine: Decoding PEM data...
	I0919 09:49:36.723576    3944 main.go:141] libmachine: Parsing certificate...
	I0919 09:49:36.723612    3944 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/cert.pem
	I0919 09:49:36.723631    3944 main.go:141] libmachine: Decoding PEM data...
	I0919 09:49:36.723638    3944 main.go:141] libmachine: Parsing certificate...
	I0919 09:49:36.723960    3944 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17240-943/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17240-943/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0919 09:49:36.838605    3944 main.go:141] libmachine: Creating SSH key...
	I0919 09:49:36.937145    3944 main.go:141] libmachine: Creating Disk image...
	I0919 09:49:36.937156    3944 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 09:49:36.937305    3944 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17240-943/.minikube/machines/kubernetes-upgrade-917000/disk.qcow2.raw /Users/jenkins/minikube-integration/17240-943/.minikube/machines/kubernetes-upgrade-917000/disk.qcow2
	I0919 09:49:36.945921    3944 main.go:141] libmachine: STDOUT: 
	I0919 09:49:36.945947    3944 main.go:141] libmachine: STDERR: 
	I0919 09:49:36.946003    3944 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/kubernetes-upgrade-917000/disk.qcow2 +20000M
	I0919 09:49:36.953606    3944 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 09:49:36.953618    3944 main.go:141] libmachine: STDERR: 
	I0919 09:49:36.953642    3944 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17240-943/.minikube/machines/kubernetes-upgrade-917000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17240-943/.minikube/machines/kubernetes-upgrade-917000/disk.qcow2
	I0919 09:49:36.953648    3944 main.go:141] libmachine: Starting QEMU VM...
	I0919 09:49:36.953693    3944 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17240-943/.minikube/machines/kubernetes-upgrade-917000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/kubernetes-upgrade-917000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17240-943/.minikube/machines/kubernetes-upgrade-917000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:f5:1c:0c:c3:f8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/kubernetes-upgrade-917000/disk.qcow2
	I0919 09:49:36.955210    3944 main.go:141] libmachine: STDOUT: 
	I0919 09:49:36.955223    3944 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 09:49:36.955243    3944 client.go:171] LocalClient.Create took 231.783417ms
	I0919 09:49:38.957380    3944 start.go:128] duration metric: createHost completed in 2.258768s
	I0919 09:49:38.957440    3944 start.go:83] releasing machines lock for "kubernetes-upgrade-917000", held for 2.258885958s
	W0919 09:49:38.957511    3944 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:49:38.964722    3944 out.go:177] * Deleting "kubernetes-upgrade-917000" in qemu2 ...
	W0919 09:49:38.986534    3944 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:49:38.986563    3944 start.go:703] Will try again in 5 seconds ...
	I0919 09:49:43.988690    3944 start.go:365] acquiring machines lock for kubernetes-upgrade-917000: {Name:mk731f293d5b39390bcf4e15f4078ebe7c03576e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 09:49:43.989154    3944 start.go:369] acquired machines lock for "kubernetes-upgrade-917000" in 354.875µs
	I0919 09:49:43.989289    3944 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-917000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-917000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID
:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 09:49:43.989560    3944 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 09:49:43.998161    3944 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0919 09:49:44.045921    3944 start.go:159] libmachine.API.Create for "kubernetes-upgrade-917000" (driver="qemu2")
	I0919 09:49:44.045975    3944 client.go:168] LocalClient.Create starting
	I0919 09:49:44.046084    3944 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca.pem
	I0919 09:49:44.046144    3944 main.go:141] libmachine: Decoding PEM data...
	I0919 09:49:44.046170    3944 main.go:141] libmachine: Parsing certificate...
	I0919 09:49:44.046241    3944 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/cert.pem
	I0919 09:49:44.046275    3944 main.go:141] libmachine: Decoding PEM data...
	I0919 09:49:44.046309    3944 main.go:141] libmachine: Parsing certificate...
	I0919 09:49:44.046777    3944 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17240-943/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17240-943/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0919 09:49:44.174695    3944 main.go:141] libmachine: Creating SSH key...
	I0919 09:49:44.322004    3944 main.go:141] libmachine: Creating Disk image...
	I0919 09:49:44.322010    3944 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 09:49:44.322173    3944 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17240-943/.minikube/machines/kubernetes-upgrade-917000/disk.qcow2.raw /Users/jenkins/minikube-integration/17240-943/.minikube/machines/kubernetes-upgrade-917000/disk.qcow2
	I0919 09:49:44.331092    3944 main.go:141] libmachine: STDOUT: 
	I0919 09:49:44.331110    3944 main.go:141] libmachine: STDERR: 
	I0919 09:49:44.331176    3944 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/kubernetes-upgrade-917000/disk.qcow2 +20000M
	I0919 09:49:44.338545    3944 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 09:49:44.338566    3944 main.go:141] libmachine: STDERR: 
	I0919 09:49:44.338579    3944 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17240-943/.minikube/machines/kubernetes-upgrade-917000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17240-943/.minikube/machines/kubernetes-upgrade-917000/disk.qcow2
	I0919 09:49:44.338586    3944 main.go:141] libmachine: Starting QEMU VM...
	I0919 09:49:44.338626    3944 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17240-943/.minikube/machines/kubernetes-upgrade-917000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/kubernetes-upgrade-917000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17240-943/.minikube/machines/kubernetes-upgrade-917000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:30:7a:78:93:e3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/kubernetes-upgrade-917000/disk.qcow2
	I0919 09:49:44.340179    3944 main.go:141] libmachine: STDOUT: 
	I0919 09:49:44.340194    3944 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 09:49:44.340207    3944 client.go:171] LocalClient.Create took 294.232542ms
	I0919 09:49:46.342384    3944 start.go:128] duration metric: createHost completed in 2.352829542s
	I0919 09:49:46.342458    3944 start.go:83] releasing machines lock for "kubernetes-upgrade-917000", held for 2.353322s
	W0919 09:49:46.342882    3944 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-917000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-917000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:49:46.353649    3944 out.go:177] 
	W0919 09:49:46.357683    3944 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 09:49:46.357708    3944 out.go:239] * 
	* 
	W0919 09:49:46.360337    3944 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 09:49:46.370554    3944 out.go:177] 

** /stderr **
version_upgrade_test.go:237: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-917000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-917000
version_upgrade_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-917000 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-917000 status --format={{.Host}}: exit status 7 (37.082ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-917000 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-917000 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.183335833s)

-- stdout --
	* [kubernetes-upgrade-917000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17240
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node kubernetes-upgrade-917000 in cluster kubernetes-upgrade-917000
	* Restarting existing qemu2 VM for "kubernetes-upgrade-917000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-917000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0919 09:49:46.547515    3966 out.go:296] Setting OutFile to fd 1 ...
	I0919 09:49:46.547651    3966 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:49:46.547654    3966 out.go:309] Setting ErrFile to fd 2...
	I0919 09:49:46.547657    3966 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:49:46.547774    3966 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17240-943/.minikube/bin
	I0919 09:49:46.548759    3966 out.go:303] Setting JSON to false
	I0919 09:49:46.563812    3966 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1160,"bootTime":1695141026,"procs":405,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0919 09:49:46.563889    3966 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0919 09:49:46.568502    3966 out.go:177] * [kubernetes-upgrade-917000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0919 09:49:46.575441    3966 out.go:177]   - MINIKUBE_LOCATION=17240
	I0919 09:49:46.579436    3966 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	I0919 09:49:46.575501    3966 notify.go:220] Checking for updates...
	I0919 09:49:46.585424    3966 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 09:49:46.591354    3966 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 09:49:46.595389    3966 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	I0919 09:49:46.598444    3966 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 09:49:46.602636    3966 config.go:182] Loaded profile config "kubernetes-upgrade-917000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0919 09:49:46.602884    3966 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 09:49:46.607361    3966 out.go:177] * Using the qemu2 driver based on existing profile
	I0919 09:49:46.614357    3966 start.go:298] selected driver: qemu2
	I0919 09:49:46.614365    3966 start.go:902] validating driver "qemu2" against &{Name:kubernetes-upgrade-917000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubern
etesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-917000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:do
cker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 09:49:46.614427    3966 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 09:49:46.616524    3966 cni.go:84] Creating CNI manager for ""
	I0919 09:49:46.616541    3966 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 09:49:46.616547    3966 start_flags.go:321] config:
	{Name:kubernetes-upgrade-917000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:kubernetes-upgrade-917000 Nam
espace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFi
rmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 09:49:46.620655    3966 iso.go:125] acquiring lock: {Name:mka8e023e06bcb4803b8b8b48f1b2d6ef6b15681 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 09:49:46.631253    3966 out.go:177] * Starting control plane node kubernetes-upgrade-917000 in cluster kubernetes-upgrade-917000
	I0919 09:49:46.635372    3966 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0919 09:49:46.635389    3966 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0919 09:49:46.635401    3966 cache.go:57] Caching tarball of preloaded images
	I0919 09:49:46.635454    3966 preload.go:174] Found /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0919 09:49:46.635460    3966 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0919 09:49:46.635516    3966 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/kubernetes-upgrade-917000/config.json ...
	I0919 09:49:46.635868    3966 start.go:365] acquiring machines lock for kubernetes-upgrade-917000: {Name:mk731f293d5b39390bcf4e15f4078ebe7c03576e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 09:49:46.635897    3966 start.go:369] acquired machines lock for "kubernetes-upgrade-917000" in 22.584µs
	I0919 09:49:46.635908    3966 start.go:96] Skipping create...Using existing machine configuration
	I0919 09:49:46.635913    3966 fix.go:54] fixHost starting: 
	I0919 09:49:46.636059    3966 fix.go:102] recreateIfNeeded on kubernetes-upgrade-917000: state=Stopped err=<nil>
	W0919 09:49:46.636072    3966 fix.go:128] unexpected machine state, will restart: <nil>
	I0919 09:49:46.641346    3966 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-917000" ...
	I0919 09:49:46.645437    3966 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17240-943/.minikube/machines/kubernetes-upgrade-917000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/kubernetes-upgrade-917000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17240-943/.minikube/machines/kubernetes-upgrade-917000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:30:7a:78:93:e3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/kubernetes-upgrade-917000/disk.qcow2
	I0919 09:49:46.647345    3966 main.go:141] libmachine: STDOUT: 
	I0919 09:49:46.647366    3966 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 09:49:46.647395    3966 fix.go:56] fixHost completed within 11.482417ms
	I0919 09:49:46.647400    3966 start.go:83] releasing machines lock for "kubernetes-upgrade-917000", held for 11.499333ms
	W0919 09:49:46.647407    3966 start.go:688] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 09:49:46.647441    3966 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:49:46.647446    3966 start.go:703] Will try again in 5 seconds ...
	I0919 09:49:51.649746    3966 start.go:365] acquiring machines lock for kubernetes-upgrade-917000: {Name:mk731f293d5b39390bcf4e15f4078ebe7c03576e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 09:49:51.650093    3966 start.go:369] acquired machines lock for "kubernetes-upgrade-917000" in 241.75µs
	I0919 09:49:51.650215    3966 start.go:96] Skipping create...Using existing machine configuration
	I0919 09:49:51.650238    3966 fix.go:54] fixHost starting: 
	I0919 09:49:51.650900    3966 fix.go:102] recreateIfNeeded on kubernetes-upgrade-917000: state=Stopped err=<nil>
	W0919 09:49:51.650926    3966 fix.go:128] unexpected machine state, will restart: <nil>
	I0919 09:49:51.658453    3966 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-917000" ...
	I0919 09:49:51.661753    3966 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17240-943/.minikube/machines/kubernetes-upgrade-917000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/kubernetes-upgrade-917000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17240-943/.minikube/machines/kubernetes-upgrade-917000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:30:7a:78:93:e3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/kubernetes-upgrade-917000/disk.qcow2
	I0919 09:49:51.670285    3966 main.go:141] libmachine: STDOUT: 
	I0919 09:49:51.670355    3966 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 09:49:51.670463    3966 fix.go:56] fixHost completed within 20.222333ms
	I0919 09:49:51.670495    3966 start.go:83] releasing machines lock for "kubernetes-upgrade-917000", held for 20.373875ms
	W0919 09:49:51.670735    3966 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-917000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-917000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:49:51.679294    3966 out.go:177] 
	W0919 09:49:51.683474    3966 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 09:49:51.683538    3966 out.go:239] * 
	* 
	W0919 09:49:51.686400    3966 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 09:49:51.696472    3966 out.go:177] 

** /stderr **
version_upgrade_test.go:258: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-917000 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-917000 version --output=json
version_upgrade_test.go:261: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-917000 version --output=json: exit status 1 (63.231ms)

** stderr ** 
	error: context "kubernetes-upgrade-917000" does not exist

** /stderr **
version_upgrade_test.go:263: error running kubectl: exit status 1
panic.go:523: *** TestKubernetesUpgrade FAILED at 2023-09-19 09:49:51.769943 -0700 PDT m=+974.228574543
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-917000 -n kubernetes-upgrade-917000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-917000 -n kubernetes-upgrade-917000: exit status 7 (31.502583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-917000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-917000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-917000
--- FAIL: TestKubernetesUpgrade (15.31s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.49s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.31.2 on darwin (arm64)
- MINIKUBE_LOCATION=17240
- KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1739895896/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.49s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.05s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.31.2 on darwin (arm64)
- MINIKUBE_LOCATION=17240
- KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current4035710352/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.05s)

TestStoppedBinaryUpgrade/Setup (145.81s)

=== RUN   TestStoppedBinaryUpgrade/Setup
version_upgrade_test.go:168: v1.6.2 release installation failed: bad response code: 404
--- FAIL: TestStoppedBinaryUpgrade/Setup (145.81s)

TestPause/serial/Start (9.8s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-322000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-322000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.734437834s)

-- stdout --
	* [pause-322000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17240
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node pause-322000 in cluster pause-322000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-322000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-322000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-322000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-322000 -n pause-322000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-322000 -n pause-322000: exit status 7 (68.895042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-322000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.80s)
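Note: this failure, like the NoKubernetes and network-plugin failures that follow, has the same root cause: the socket_vmnet daemon was not listening on `/var/run/socket_vmnet` when the qemu2 driver tried to connect. A minimal sketch for verifying the socket before re-running these tests (the Homebrew service name and default socket path are assumptions; adjust for a source install):

```shell
# check_socket: report whether the socket_vmnet unix socket exists at a path.
# Default path matches the one in the failures above; override via $1.
check_socket() {
    sock="${1:-/var/run/socket_vmnet}"
    if [ -S "$sock" ]; then
        echo "socket present: $sock"
    else
        echo "socket missing: $sock"
        # Assumed Homebrew install; a source install may use launchd directly.
        echo "try: sudo brew services start socket_vmnet"
    fi
}

check_socket "$@"
```

If the socket is missing, restarting the daemon and then `minikube delete -p <profile>` before retrying is the usual recovery path suggested by the log output above.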

TestNoKubernetes/serial/StartWithK8s (9.78s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-034000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-034000 --driver=qemu2 : exit status 80 (9.718525542s)

-- stdout --
	* [NoKubernetes-034000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17240
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node NoKubernetes-034000 in cluster NoKubernetes-034000
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-034000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-034000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-034000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-034000 -n NoKubernetes-034000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-034000 -n NoKubernetes-034000: exit status 7 (65.344833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-034000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.78s)

TestNoKubernetes/serial/StartWithStopK8s (5.32s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-034000 --no-kubernetes --driver=qemu2 
E0919 09:50:52.763540    2051 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/ingress-addon-legacy-969000/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-034000 --no-kubernetes --driver=qemu2 : exit status 80 (5.250160875s)

-- stdout --
	* [NoKubernetes-034000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17240
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-034000
	* Restarting existing qemu2 VM for "NoKubernetes-034000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-034000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-034000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-034000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-034000 -n NoKubernetes-034000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-034000 -n NoKubernetes-034000: exit status 7 (66.599625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-034000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.32s)

TestNoKubernetes/serial/Start (5.31s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-034000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-034000 --no-kubernetes --driver=qemu2 : exit status 80 (5.240450834s)

-- stdout --
	* [NoKubernetes-034000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17240
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-034000
	* Restarting existing qemu2 VM for "NoKubernetes-034000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-034000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-034000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-034000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-034000 -n NoKubernetes-034000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-034000 -n NoKubernetes-034000: exit status 7 (65.926041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-034000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.31s)

TestNoKubernetes/serial/StartNoArgs (5.29s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-034000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-034000 --driver=qemu2 : exit status 80 (5.223299208s)

-- stdout --
	* [NoKubernetes-034000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17240
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-034000
	* Restarting existing qemu2 VM for "NoKubernetes-034000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-034000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-034000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-034000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-034000 -n NoKubernetes-034000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-034000 -n NoKubernetes-034000: exit status 7 (69.472333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-034000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.29s)

TestNetworkPlugins/group/kindnet/Start (9.78s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-826000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-826000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.7797295s)

-- stdout --
	* [kindnet-826000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17240
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kindnet-826000 in cluster kindnet-826000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-826000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0919 09:51:05.939216    4111 out.go:296] Setting OutFile to fd 1 ...
	I0919 09:51:05.939332    4111 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:51:05.939334    4111 out.go:309] Setting ErrFile to fd 2...
	I0919 09:51:05.939337    4111 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:51:05.939476    4111 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17240-943/.minikube/bin
	I0919 09:51:05.940492    4111 out.go:303] Setting JSON to false
	I0919 09:51:05.955898    4111 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1239,"bootTime":1695141026,"procs":405,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0919 09:51:05.955984    4111 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0919 09:51:05.960883    4111 out.go:177] * [kindnet-826000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0919 09:51:05.971693    4111 out.go:177]   - MINIKUBE_LOCATION=17240
	I0919 09:51:05.967775    4111 notify.go:220] Checking for updates...
	I0919 09:51:05.979689    4111 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	I0919 09:51:05.986642    4111 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 09:51:05.993687    4111 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 09:51:05.996652    4111 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	I0919 09:51:05.999657    4111 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 09:51:06.003113    4111 config.go:182] Loaded profile config "multinode-120000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 09:51:06.003167    4111 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 09:51:06.007682    4111 out.go:177] * Using the qemu2 driver based on user configuration
	I0919 09:51:06.014692    4111 start.go:298] selected driver: qemu2
	I0919 09:51:06.014698    4111 start.go:902] validating driver "qemu2" against <nil>
	I0919 09:51:06.014703    4111 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 09:51:06.016851    4111 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0919 09:51:06.019619    4111 out.go:177] * Automatically selected the socket_vmnet network
	I0919 09:51:06.022778    4111 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 09:51:06.022804    4111 cni.go:84] Creating CNI manager for "kindnet"
	I0919 09:51:06.022816    4111 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0919 09:51:06.022821    4111 start_flags.go:321] config:
	{Name:kindnet-826000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:kindnet-826000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: S
SHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 09:51:06.027172    4111 iso.go:125] acquiring lock: {Name:mka8e023e06bcb4803b8b8b48f1b2d6ef6b15681 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 09:51:06.034713    4111 out.go:177] * Starting control plane node kindnet-826000 in cluster kindnet-826000
	I0919 09:51:06.038705    4111 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0919 09:51:06.038729    4111 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0919 09:51:06.038751    4111 cache.go:57] Caching tarball of preloaded images
	I0919 09:51:06.038824    4111 preload.go:174] Found /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0919 09:51:06.038830    4111 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0919 09:51:06.038895    4111 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/kindnet-826000/config.json ...
	I0919 09:51:06.038907    4111 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/kindnet-826000/config.json: {Name:mk0826c986ca6074393d97abe1ccbe1da060a836 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 09:51:06.039119    4111 start.go:365] acquiring machines lock for kindnet-826000: {Name:mk731f293d5b39390bcf4e15f4078ebe7c03576e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 09:51:06.039151    4111 start.go:369] acquired machines lock for "kindnet-826000" in 25.625µs
	I0919 09:51:06.039164    4111 start.go:93] Provisioning new machine with config: &{Name:kindnet-826000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.2 ClusterName:kindnet-826000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 09:51:06.039202    4111 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 09:51:06.047660    4111 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0919 09:51:06.063852    4111 start.go:159] libmachine.API.Create for "kindnet-826000" (driver="qemu2")
	I0919 09:51:06.063883    4111 client.go:168] LocalClient.Create starting
	I0919 09:51:06.063944    4111 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca.pem
	I0919 09:51:06.063973    4111 main.go:141] libmachine: Decoding PEM data...
	I0919 09:51:06.063989    4111 main.go:141] libmachine: Parsing certificate...
	I0919 09:51:06.064028    4111 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/cert.pem
	I0919 09:51:06.064048    4111 main.go:141] libmachine: Decoding PEM data...
	I0919 09:51:06.064056    4111 main.go:141] libmachine: Parsing certificate...
	I0919 09:51:06.064425    4111 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17240-943/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17240-943/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0919 09:51:06.180206    4111 main.go:141] libmachine: Creating SSH key...
	I0919 09:51:06.267719    4111 main.go:141] libmachine: Creating Disk image...
	I0919 09:51:06.267727    4111 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 09:51:06.267864    4111 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17240-943/.minikube/machines/kindnet-826000/disk.qcow2.raw /Users/jenkins/minikube-integration/17240-943/.minikube/machines/kindnet-826000/disk.qcow2
	I0919 09:51:06.276387    4111 main.go:141] libmachine: STDOUT: 
	I0919 09:51:06.276402    4111 main.go:141] libmachine: STDERR: 
	I0919 09:51:06.276466    4111 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/kindnet-826000/disk.qcow2 +20000M
	I0919 09:51:06.283745    4111 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 09:51:06.283758    4111 main.go:141] libmachine: STDERR: 
	I0919 09:51:06.283779    4111 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17240-943/.minikube/machines/kindnet-826000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17240-943/.minikube/machines/kindnet-826000/disk.qcow2
	I0919 09:51:06.283790    4111 main.go:141] libmachine: Starting QEMU VM...
	I0919 09:51:06.283824    4111 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17240-943/.minikube/machines/kindnet-826000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/kindnet-826000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17240-943/.minikube/machines/kindnet-826000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:ec:6c:68:df:8c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/kindnet-826000/disk.qcow2
	I0919 09:51:06.285292    4111 main.go:141] libmachine: STDOUT: 
	I0919 09:51:06.285312    4111 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 09:51:06.285329    4111 client.go:171] LocalClient.Create took 221.444375ms
	I0919 09:51:08.287475    4111 start.go:128] duration metric: createHost completed in 2.248284125s
	I0919 09:51:08.287537    4111 start.go:83] releasing machines lock for "kindnet-826000", held for 2.248415041s
	W0919 09:51:08.287649    4111 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:51:08.295060    4111 out.go:177] * Deleting "kindnet-826000" in qemu2 ...
	W0919 09:51:08.315607    4111 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:51:08.315630    4111 start.go:703] Will try again in 5 seconds ...
	I0919 09:51:13.317732    4111 start.go:365] acquiring machines lock for kindnet-826000: {Name:mk731f293d5b39390bcf4e15f4078ebe7c03576e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 09:51:13.318125    4111 start.go:369] acquired machines lock for "kindnet-826000" in 314.75µs
	I0919 09:51:13.318230    4111 start.go:93] Provisioning new machine with config: &{Name:kindnet-826000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.2 ClusterName:kindnet-826000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 09:51:13.318518    4111 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 09:51:13.324155    4111 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0919 09:51:13.373211    4111 start.go:159] libmachine.API.Create for "kindnet-826000" (driver="qemu2")
	I0919 09:51:13.373243    4111 client.go:168] LocalClient.Create starting
	I0919 09:51:13.373357    4111 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca.pem
	I0919 09:51:13.373429    4111 main.go:141] libmachine: Decoding PEM data...
	I0919 09:51:13.373454    4111 main.go:141] libmachine: Parsing certificate...
	I0919 09:51:13.373515    4111 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/cert.pem
	I0919 09:51:13.373555    4111 main.go:141] libmachine: Decoding PEM data...
	I0919 09:51:13.373571    4111 main.go:141] libmachine: Parsing certificate...
	I0919 09:51:13.374099    4111 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17240-943/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17240-943/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0919 09:51:13.538732    4111 main.go:141] libmachine: Creating SSH key...
	I0919 09:51:13.635710    4111 main.go:141] libmachine: Creating Disk image...
	I0919 09:51:13.635716    4111 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 09:51:13.635858    4111 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17240-943/.minikube/machines/kindnet-826000/disk.qcow2.raw /Users/jenkins/minikube-integration/17240-943/.minikube/machines/kindnet-826000/disk.qcow2
	I0919 09:51:13.644682    4111 main.go:141] libmachine: STDOUT: 
	I0919 09:51:13.644711    4111 main.go:141] libmachine: STDERR: 
	I0919 09:51:13.644773    4111 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/kindnet-826000/disk.qcow2 +20000M
	I0919 09:51:13.652105    4111 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 09:51:13.652118    4111 main.go:141] libmachine: STDERR: 
	I0919 09:51:13.652131    4111 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17240-943/.minikube/machines/kindnet-826000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17240-943/.minikube/machines/kindnet-826000/disk.qcow2
	I0919 09:51:13.652137    4111 main.go:141] libmachine: Starting QEMU VM...
	I0919 09:51:13.652172    4111 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17240-943/.minikube/machines/kindnet-826000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/kindnet-826000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17240-943/.minikube/machines/kindnet-826000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:ce:5e:45:01:48 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/kindnet-826000/disk.qcow2
	I0919 09:51:13.653752    4111 main.go:141] libmachine: STDOUT: 
	I0919 09:51:13.653773    4111 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 09:51:13.653788    4111 client.go:171] LocalClient.Create took 280.5445ms
	I0919 09:51:15.655973    4111 start.go:128] duration metric: createHost completed in 2.337455417s
	I0919 09:51:15.656067    4111 start.go:83] releasing machines lock for "kindnet-826000", held for 2.337958708s
	W0919 09:51:15.656568    4111 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-826000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-826000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:51:15.665157    4111 out.go:177] 
	W0919 09:51:15.669204    4111 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 09:51:15.669227    4111 out.go:239] * 
	* 
	W0919 09:51:15.671883    4111 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 09:51:15.680249    4111 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.78s)
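Every failure above reduces to the same condition: the `socket_vmnet` socket file path is configured, but no daemon is accepting connections on it, so `socket_vmnet_client` gets ECONNREFUSED and the QEMU VM never starts. A minimal sketch of that failure mode (using a hypothetical temp path as a stand-in for `/var/run/socket_vmnet`, not the real daemon):

```python
import os
import socket
import tempfile

# Stand-in for /var/run/socket_vmnet: a Unix socket file that exists on
# disk but has no process listening behind it.
path = os.path.join(tempfile.mkdtemp(), "socket_vmnet")

# bind() creates the socket file; closing without listen()/accept()
# leaves the file in place with no listener attached.
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(path)
server.close()

# A client connecting to that path now fails the same way
# socket_vmnet_client does in the log: "Connection refused".
client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
try:
    client.connect(path)
    result = "connected"
except ConnectionRefusedError:
    result = "Connection refused"
finally:
    client.close()

print(result)  # -> Connection refused
```

In other words, the socket path existing is not enough; the `socket_vmnet` daemon must be running and listening on it before `minikube start --driver=qemu2` is invoked on this agent.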

TestNetworkPlugins/group/auto/Start (9.89s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-826000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
E0919 09:51:20.473507    2051 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/ingress-addon-legacy-969000/client.crt: no such file or directory
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-826000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.884380042s)

                                                
                                                
-- stdout --
	* [auto-826000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17240
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node auto-826000 in cluster auto-826000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-826000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 09:51:17.838545    4229 out.go:296] Setting OutFile to fd 1 ...
	I0919 09:51:17.838674    4229 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:51:17.838677    4229 out.go:309] Setting ErrFile to fd 2...
	I0919 09:51:17.838680    4229 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:51:17.838808    4229 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17240-943/.minikube/bin
	I0919 09:51:17.839835    4229 out.go:303] Setting JSON to false
	I0919 09:51:17.854849    4229 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1251,"bootTime":1695141026,"procs":406,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0919 09:51:17.854938    4229 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0919 09:51:17.860497    4229 out.go:177] * [auto-826000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0919 09:51:17.868479    4229 out.go:177]   - MINIKUBE_LOCATION=17240
	I0919 09:51:17.868530    4229 notify.go:220] Checking for updates...
	I0919 09:51:17.874793    4229 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	I0919 09:51:17.878420    4229 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 09:51:17.882412    4229 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 09:51:17.885344    4229 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	I0919 09:51:17.888411    4229 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 09:51:17.891840    4229 config.go:182] Loaded profile config "multinode-120000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 09:51:17.891888    4229 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 09:51:17.895371    4229 out.go:177] * Using the qemu2 driver based on user configuration
	I0919 09:51:17.902428    4229 start.go:298] selected driver: qemu2
	I0919 09:51:17.902436    4229 start.go:902] validating driver "qemu2" against <nil>
	I0919 09:51:17.902445    4229 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 09:51:17.904507    4229 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0919 09:51:17.907364    4229 out.go:177] * Automatically selected the socket_vmnet network
	I0919 09:51:17.910570    4229 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 09:51:17.910592    4229 cni.go:84] Creating CNI manager for ""
	I0919 09:51:17.910601    4229 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 09:51:17.910605    4229 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0919 09:51:17.910611    4229 start_flags.go:321] config:
	{Name:auto-826000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:auto-826000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISoc
ket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0
AutoPauseInterval:1m0s}
	I0919 09:51:17.914759    4229 iso.go:125] acquiring lock: {Name:mka8e023e06bcb4803b8b8b48f1b2d6ef6b15681 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 09:51:17.921418    4229 out.go:177] * Starting control plane node auto-826000 in cluster auto-826000
	I0919 09:51:17.925428    4229 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0919 09:51:17.925446    4229 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0919 09:51:17.925461    4229 cache.go:57] Caching tarball of preloaded images
	I0919 09:51:17.925511    4229 preload.go:174] Found /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0919 09:51:17.925517    4229 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0919 09:51:17.925577    4229 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/auto-826000/config.json ...
	I0919 09:51:17.925590    4229 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/auto-826000/config.json: {Name:mk76ca64a811e80a7f02ac511ca35c0a168fa7cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 09:51:17.925817    4229 start.go:365] acquiring machines lock for auto-826000: {Name:mk731f293d5b39390bcf4e15f4078ebe7c03576e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 09:51:17.925847    4229 start.go:369] acquired machines lock for "auto-826000" in 25.041µs
	I0919 09:51:17.925860    4229 start.go:93] Provisioning new machine with config: &{Name:auto-826000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.28.2 ClusterName:auto-826000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 09:51:17.925894    4229 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 09:51:17.934417    4229 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0919 09:51:17.950780    4229 start.go:159] libmachine.API.Create for "auto-826000" (driver="qemu2")
	I0919 09:51:17.950807    4229 client.go:168] LocalClient.Create starting
	I0919 09:51:17.950874    4229 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca.pem
	I0919 09:51:17.950903    4229 main.go:141] libmachine: Decoding PEM data...
	I0919 09:51:17.950920    4229 main.go:141] libmachine: Parsing certificate...
	I0919 09:51:17.950953    4229 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/cert.pem
	I0919 09:51:17.950972    4229 main.go:141] libmachine: Decoding PEM data...
	I0919 09:51:17.950983    4229 main.go:141] libmachine: Parsing certificate...
	I0919 09:51:17.951290    4229 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17240-943/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17240-943/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0919 09:51:18.066673    4229 main.go:141] libmachine: Creating SSH key...
	I0919 09:51:18.180078    4229 main.go:141] libmachine: Creating Disk image...
	I0919 09:51:18.180090    4229 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 09:51:18.180238    4229 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17240-943/.minikube/machines/auto-826000/disk.qcow2.raw /Users/jenkins/minikube-integration/17240-943/.minikube/machines/auto-826000/disk.qcow2
	I0919 09:51:18.188700    4229 main.go:141] libmachine: STDOUT: 
	I0919 09:51:18.188715    4229 main.go:141] libmachine: STDERR: 
	I0919 09:51:18.188772    4229 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/auto-826000/disk.qcow2 +20000M
	I0919 09:51:18.195982    4229 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 09:51:18.195994    4229 main.go:141] libmachine: STDERR: 
	I0919 09:51:18.196012    4229 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17240-943/.minikube/machines/auto-826000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17240-943/.minikube/machines/auto-826000/disk.qcow2
	I0919 09:51:18.196018    4229 main.go:141] libmachine: Starting QEMU VM...
	I0919 09:51:18.196056    4229 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17240-943/.minikube/machines/auto-826000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/auto-826000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17240-943/.minikube/machines/auto-826000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:19:32:32:0e:a4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/auto-826000/disk.qcow2
	I0919 09:51:18.197624    4229 main.go:141] libmachine: STDOUT: 
	I0919 09:51:18.197636    4229 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 09:51:18.197657    4229 client.go:171] LocalClient.Create took 246.848334ms
	I0919 09:51:20.199853    4229 start.go:128] duration metric: createHost completed in 2.273965084s
	I0919 09:51:20.199926    4229 start.go:83] releasing machines lock for "auto-826000", held for 2.274107917s
	W0919 09:51:20.199988    4229 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:51:20.207310    4229 out.go:177] * Deleting "auto-826000" in qemu2 ...
	W0919 09:51:20.227222    4229 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:51:20.227246    4229 start.go:703] Will try again in 5 seconds ...
	I0919 09:51:25.229339    4229 start.go:365] acquiring machines lock for auto-826000: {Name:mk731f293d5b39390bcf4e15f4078ebe7c03576e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 09:51:25.229901    4229 start.go:369] acquired machines lock for "auto-826000" in 448.917µs
	I0919 09:51:25.230074    4229 start.go:93] Provisioning new machine with config: &{Name:auto-826000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.28.2 ClusterName:auto-826000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 09:51:25.230371    4229 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 09:51:25.240061    4229 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0919 09:51:25.288763    4229 start.go:159] libmachine.API.Create for "auto-826000" (driver="qemu2")
	I0919 09:51:25.288800    4229 client.go:168] LocalClient.Create starting
	I0919 09:51:25.288961    4229 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca.pem
	I0919 09:51:25.289011    4229 main.go:141] libmachine: Decoding PEM data...
	I0919 09:51:25.289041    4229 main.go:141] libmachine: Parsing certificate...
	I0919 09:51:25.289113    4229 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/cert.pem
	I0919 09:51:25.289149    4229 main.go:141] libmachine: Decoding PEM data...
	I0919 09:51:25.289161    4229 main.go:141] libmachine: Parsing certificate...
	I0919 09:51:25.289690    4229 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17240-943/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17240-943/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0919 09:51:25.416559    4229 main.go:141] libmachine: Creating SSH key...
	I0919 09:51:25.637092    4229 main.go:141] libmachine: Creating Disk image...
	I0919 09:51:25.637105    4229 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 09:51:25.637238    4229 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17240-943/.minikube/machines/auto-826000/disk.qcow2.raw /Users/jenkins/minikube-integration/17240-943/.minikube/machines/auto-826000/disk.qcow2
	I0919 09:51:25.646000    4229 main.go:141] libmachine: STDOUT: 
	I0919 09:51:25.646017    4229 main.go:141] libmachine: STDERR: 
	I0919 09:51:25.646072    4229 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/auto-826000/disk.qcow2 +20000M
	I0919 09:51:25.653296    4229 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 09:51:25.653318    4229 main.go:141] libmachine: STDERR: 
	I0919 09:51:25.653334    4229 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17240-943/.minikube/machines/auto-826000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17240-943/.minikube/machines/auto-826000/disk.qcow2
	I0919 09:51:25.653340    4229 main.go:141] libmachine: Starting QEMU VM...
	I0919 09:51:25.653376    4229 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17240-943/.minikube/machines/auto-826000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/auto-826000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17240-943/.minikube/machines/auto-826000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:95:0b:57:b0:28 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/auto-826000/disk.qcow2
	I0919 09:51:25.654958    4229 main.go:141] libmachine: STDOUT: 
	I0919 09:51:25.654971    4229 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 09:51:25.654986    4229 client.go:171] LocalClient.Create took 366.187167ms
	I0919 09:51:27.657132    4229 start.go:128] duration metric: createHost completed in 2.426772625s
	I0919 09:51:27.657198    4229 start.go:83] releasing machines lock for "auto-826000", held for 2.427284125s
	W0919 09:51:27.657642    4229 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-826000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-826000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:51:27.669295    4229 out.go:177] 
	W0919 09:51:27.673394    4229 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 09:51:27.673419    4229 out.go:239] * 
	* 
	W0919 09:51:27.676086    4229 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 09:51:27.683323    4229 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.89s)
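
Every qemu2 failure in this group reduces to the same root cause reported by socket_vmnet_client: `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. the socket file exists but no socket_vmnet daemon is accepting connections behind it. The snippet below is a minimal, self-contained reproduction of that failure mode; it is illustrative only (it uses a throwaway temp path, not the real `/var/run/socket_vmnet`, and is not part of the test suite):

```python
import errno
import os
import socket
import tempfile

def connect_refused(path):
    """Return True if connecting to the Unix socket at `path` fails
    with ECONNREFUSED -- the "Connection refused" seen in this log."""
    cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        cli.connect(path)
        return False
    except OSError as e:
        return e.errno == errno.ECONNREFUSED
    finally:
        cli.close()

# Create a socket file with nothing listening behind it, analogous to a
# stale /var/run/socket_vmnet left after the socket_vmnet daemon stopped.
path = os.path.join(tempfile.mkdtemp(), "socket_vmnet")
srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(path)   # creates the socket file on disk...
srv.close()      # ...but leaves no listener accepting connections

print(connect_refused(path))  # → True
```

If the same check against the real `/var/run/socket_vmnet` on the CI host returns True, the fix is host-side (restart the socket_vmnet daemon) rather than anything in minikube; `minikube delete -p <profile>` alone will not resolve it.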

                                                
                                    
TestNetworkPlugins/group/flannel/Start (9.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-826000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-826000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.70668925s)

                                                
                                                
-- stdout --
	* [flannel-826000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17240
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node flannel-826000 in cluster flannel-826000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-826000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 09:51:29.731033    4344 out.go:296] Setting OutFile to fd 1 ...
	I0919 09:51:29.731154    4344 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:51:29.731157    4344 out.go:309] Setting ErrFile to fd 2...
	I0919 09:51:29.731159    4344 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:51:29.731301    4344 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17240-943/.minikube/bin
	I0919 09:51:29.732316    4344 out.go:303] Setting JSON to false
	I0919 09:51:29.747436    4344 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1263,"bootTime":1695141026,"procs":405,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0919 09:51:29.747518    4344 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0919 09:51:29.752513    4344 out.go:177] * [flannel-826000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0919 09:51:29.760448    4344 out.go:177]   - MINIKUBE_LOCATION=17240
	I0919 09:51:29.760496    4344 notify.go:220] Checking for updates...
	I0919 09:51:29.767407    4344 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	I0919 09:51:29.771382    4344 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 09:51:29.774443    4344 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 09:51:29.781399    4344 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	I0919 09:51:29.784455    4344 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 09:51:29.787718    4344 config.go:182] Loaded profile config "multinode-120000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 09:51:29.787774    4344 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 09:51:29.792421    4344 out.go:177] * Using the qemu2 driver based on user configuration
	I0919 09:51:29.799318    4344 start.go:298] selected driver: qemu2
	I0919 09:51:29.799325    4344 start.go:902] validating driver "qemu2" against <nil>
	I0919 09:51:29.799332    4344 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 09:51:29.801557    4344 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0919 09:51:29.804348    4344 out.go:177] * Automatically selected the socket_vmnet network
	I0919 09:51:29.807535    4344 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 09:51:29.807569    4344 cni.go:84] Creating CNI manager for "flannel"
	I0919 09:51:29.807574    4344 start_flags.go:316] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0919 09:51:29.807580    4344 start_flags.go:321] config:
	{Name:flannel-826000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:flannel-826000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 09:51:29.811819    4344 iso.go:125] acquiring lock: {Name:mka8e023e06bcb4803b8b8b48f1b2d6ef6b15681 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 09:51:29.814419    4344 out.go:177] * Starting control plane node flannel-826000 in cluster flannel-826000
	I0919 09:51:29.822267    4344 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0919 09:51:29.822286    4344 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0919 09:51:29.822294    4344 cache.go:57] Caching tarball of preloaded images
	I0919 09:51:29.822358    4344 preload.go:174] Found /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0919 09:51:29.822364    4344 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0919 09:51:29.822448    4344 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/flannel-826000/config.json ...
	I0919 09:51:29.822461    4344 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/flannel-826000/config.json: {Name:mk6ac5bd8042726c5f23610e34f89d179b941007 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 09:51:29.822670    4344 start.go:365] acquiring machines lock for flannel-826000: {Name:mk731f293d5b39390bcf4e15f4078ebe7c03576e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 09:51:29.822701    4344 start.go:369] acquired machines lock for "flannel-826000" in 24.834µs
	I0919 09:51:29.822713    4344 start.go:93] Provisioning new machine with config: &{Name:flannel-826000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:flannel-826000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 09:51:29.822750    4344 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 09:51:29.830372    4344 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0919 09:51:29.847292    4344 start.go:159] libmachine.API.Create for "flannel-826000" (driver="qemu2")
	I0919 09:51:29.847318    4344 client.go:168] LocalClient.Create starting
	I0919 09:51:29.847376    4344 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca.pem
	I0919 09:51:29.847406    4344 main.go:141] libmachine: Decoding PEM data...
	I0919 09:51:29.847423    4344 main.go:141] libmachine: Parsing certificate...
	I0919 09:51:29.847465    4344 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/cert.pem
	I0919 09:51:29.847485    4344 main.go:141] libmachine: Decoding PEM data...
	I0919 09:51:29.847500    4344 main.go:141] libmachine: Parsing certificate...
	I0919 09:51:29.847874    4344 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17240-943/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17240-943/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0919 09:51:29.962521    4344 main.go:141] libmachine: Creating SSH key...
	I0919 09:51:30.023841    4344 main.go:141] libmachine: Creating Disk image...
	I0919 09:51:30.023846    4344 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 09:51:30.023974    4344 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17240-943/.minikube/machines/flannel-826000/disk.qcow2.raw /Users/jenkins/minikube-integration/17240-943/.minikube/machines/flannel-826000/disk.qcow2
	I0919 09:51:30.032492    4344 main.go:141] libmachine: STDOUT: 
	I0919 09:51:30.032507    4344 main.go:141] libmachine: STDERR: 
	I0919 09:51:30.032557    4344 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/flannel-826000/disk.qcow2 +20000M
	I0919 09:51:30.039705    4344 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 09:51:30.039717    4344 main.go:141] libmachine: STDERR: 
	I0919 09:51:30.039735    4344 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17240-943/.minikube/machines/flannel-826000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17240-943/.minikube/machines/flannel-826000/disk.qcow2
	I0919 09:51:30.039740    4344 main.go:141] libmachine: Starting QEMU VM...
	I0919 09:51:30.039775    4344 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17240-943/.minikube/machines/flannel-826000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/flannel-826000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17240-943/.minikube/machines/flannel-826000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:7a:58:7b:38:f1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/flannel-826000/disk.qcow2
	I0919 09:51:30.041222    4344 main.go:141] libmachine: STDOUT: 
	I0919 09:51:30.041240    4344 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 09:51:30.041257    4344 client.go:171] LocalClient.Create took 193.938292ms
	I0919 09:51:32.043442    4344 start.go:128] duration metric: createHost completed in 2.220694792s
	I0919 09:51:32.043532    4344 start.go:83] releasing machines lock for "flannel-826000", held for 2.22086s
	W0919 09:51:32.043595    4344 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:51:32.057668    4344 out.go:177] * Deleting "flannel-826000" in qemu2 ...
	W0919 09:51:32.077323    4344 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:51:32.077354    4344 start.go:703] Will try again in 5 seconds ...
	I0919 09:51:37.079526    4344 start.go:365] acquiring machines lock for flannel-826000: {Name:mk731f293d5b39390bcf4e15f4078ebe7c03576e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 09:51:37.080017    4344 start.go:369] acquired machines lock for "flannel-826000" in 363.458µs
	I0919 09:51:37.080151    4344 start.go:93] Provisioning new machine with config: &{Name:flannel-826000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:flannel-826000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 09:51:37.080403    4344 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 09:51:37.089060    4344 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0919 09:51:37.134690    4344 start.go:159] libmachine.API.Create for "flannel-826000" (driver="qemu2")
	I0919 09:51:37.134737    4344 client.go:168] LocalClient.Create starting
	I0919 09:51:37.134848    4344 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca.pem
	I0919 09:51:37.134923    4344 main.go:141] libmachine: Decoding PEM data...
	I0919 09:51:37.134940    4344 main.go:141] libmachine: Parsing certificate...
	I0919 09:51:37.135005    4344 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/cert.pem
	I0919 09:51:37.135041    4344 main.go:141] libmachine: Decoding PEM data...
	I0919 09:51:37.135052    4344 main.go:141] libmachine: Parsing certificate...
	I0919 09:51:37.135567    4344 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17240-943/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17240-943/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0919 09:51:37.262321    4344 main.go:141] libmachine: Creating SSH key...
	I0919 09:51:37.352748    4344 main.go:141] libmachine: Creating Disk image...
	I0919 09:51:37.352755    4344 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 09:51:37.352896    4344 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17240-943/.minikube/machines/flannel-826000/disk.qcow2.raw /Users/jenkins/minikube-integration/17240-943/.minikube/machines/flannel-826000/disk.qcow2
	I0919 09:51:37.361311    4344 main.go:141] libmachine: STDOUT: 
	I0919 09:51:37.361327    4344 main.go:141] libmachine: STDERR: 
	I0919 09:51:37.361377    4344 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/flannel-826000/disk.qcow2 +20000M
	I0919 09:51:37.368636    4344 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 09:51:37.368649    4344 main.go:141] libmachine: STDERR: 
	I0919 09:51:37.368664    4344 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17240-943/.minikube/machines/flannel-826000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17240-943/.minikube/machines/flannel-826000/disk.qcow2
	I0919 09:51:37.368672    4344 main.go:141] libmachine: Starting QEMU VM...
	I0919 09:51:37.368710    4344 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17240-943/.minikube/machines/flannel-826000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/flannel-826000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17240-943/.minikube/machines/flannel-826000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:22:1a:9d:86:b9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/flannel-826000/disk.qcow2
	I0919 09:51:37.370265    4344 main.go:141] libmachine: STDOUT: 
	I0919 09:51:37.370278    4344 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 09:51:37.370291    4344 client.go:171] LocalClient.Create took 235.552958ms
	I0919 09:51:39.372479    4344 start.go:128] duration metric: createHost completed in 2.292074375s
	I0919 09:51:39.372549    4344 start.go:83] releasing machines lock for "flannel-826000", held for 2.292549458s
	W0919 09:51:39.372912    4344 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-826000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-826000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:51:39.381595    4344 out.go:177] 
	W0919 09:51:39.386833    4344 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 09:51:39.386873    4344 out.go:239] * 
	* 
	W0919 09:51:39.389642    4344 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 09:51:39.397651    4344 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.71s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (9.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-826000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-826000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.783366083s)

                                                
                                                
-- stdout --
	* [enable-default-cni-826000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17240
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node enable-default-cni-826000 in cluster enable-default-cni-826000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-826000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I0919 09:51:41.664873    4464 out.go:296] Setting OutFile to fd 1 ...
	I0919 09:51:41.665016    4464 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:51:41.665019    4464 out.go:309] Setting ErrFile to fd 2...
	I0919 09:51:41.665021    4464 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:51:41.665133    4464 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17240-943/.minikube/bin
	I0919 09:51:41.666167    4464 out.go:303] Setting JSON to false
	I0919 09:51:41.681411    4464 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1275,"bootTime":1695141026,"procs":405,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0919 09:51:41.681485    4464 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0919 09:51:41.686697    4464 out.go:177] * [enable-default-cni-826000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0919 09:51:41.694639    4464 out.go:177]   - MINIKUBE_LOCATION=17240
	I0919 09:51:41.698648    4464 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	I0919 09:51:41.694679    4464 notify.go:220] Checking for updates...
	I0919 09:51:41.704677    4464 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 09:51:41.707644    4464 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 09:51:41.710679    4464 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	I0919 09:51:41.713652    4464 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 09:51:41.716989    4464 config.go:182] Loaded profile config "multinode-120000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 09:51:41.717029    4464 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 09:51:41.721652    4464 out.go:177] * Using the qemu2 driver based on user configuration
	I0919 09:51:41.728646    4464 start.go:298] selected driver: qemu2
	I0919 09:51:41.728652    4464 start.go:902] validating driver "qemu2" against <nil>
	I0919 09:51:41.728658    4464 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 09:51:41.730553    4464 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0919 09:51:41.733671    4464 out.go:177] * Automatically selected the socket_vmnet network
	E0919 09:51:41.736712    4464 start_flags.go:455] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0919 09:51:41.736747    4464 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 09:51:41.736778    4464 cni.go:84] Creating CNI manager for "bridge"
	I0919 09:51:41.736783    4464 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0919 09:51:41.736789    4464 start_flags.go:321] config:
	{Name:enable-default-cni-826000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:enable-default-cni-826000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet Sta
ticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 09:51:41.740904    4464 iso.go:125] acquiring lock: {Name:mka8e023e06bcb4803b8b8b48f1b2d6ef6b15681 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 09:51:41.747638    4464 out.go:177] * Starting control plane node enable-default-cni-826000 in cluster enable-default-cni-826000
	I0919 09:51:41.751617    4464 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0919 09:51:41.751635    4464 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0919 09:51:41.751647    4464 cache.go:57] Caching tarball of preloaded images
	I0919 09:51:41.751701    4464 preload.go:174] Found /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0919 09:51:41.751707    4464 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0919 09:51:41.751779    4464 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/enable-default-cni-826000/config.json ...
	I0919 09:51:41.751792    4464 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/enable-default-cni-826000/config.json: {Name:mk895e9801dfffed05d5b039f266b33d82dc3d6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 09:51:41.752025    4464 start.go:365] acquiring machines lock for enable-default-cni-826000: {Name:mk731f293d5b39390bcf4e15f4078ebe7c03576e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 09:51:41.752059    4464 start.go:369] acquired machines lock for "enable-default-cni-826000" in 26.166µs
	I0919 09:51:41.752073    4464 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-826000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:enable-default-cni-826000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 09:51:41.752104    4464 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 09:51:41.756506    4464 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0919 09:51:41.772944    4464 start.go:159] libmachine.API.Create for "enable-default-cni-826000" (driver="qemu2")
	I0919 09:51:41.772971    4464 client.go:168] LocalClient.Create starting
	I0919 09:51:41.773031    4464 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca.pem
	I0919 09:51:41.773058    4464 main.go:141] libmachine: Decoding PEM data...
	I0919 09:51:41.773082    4464 main.go:141] libmachine: Parsing certificate...
	I0919 09:51:41.773122    4464 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/cert.pem
	I0919 09:51:41.773141    4464 main.go:141] libmachine: Decoding PEM data...
	I0919 09:51:41.773147    4464 main.go:141] libmachine: Parsing certificate...
	I0919 09:51:41.773496    4464 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17240-943/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17240-943/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0919 09:51:41.888678    4464 main.go:141] libmachine: Creating SSH key...
	I0919 09:51:41.977324    4464 main.go:141] libmachine: Creating Disk image...
	I0919 09:51:41.977333    4464 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 09:51:41.977469    4464 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17240-943/.minikube/machines/enable-default-cni-826000/disk.qcow2.raw /Users/jenkins/minikube-integration/17240-943/.minikube/machines/enable-default-cni-826000/disk.qcow2
	I0919 09:51:41.985957    4464 main.go:141] libmachine: STDOUT: 
	I0919 09:51:41.985973    4464 main.go:141] libmachine: STDERR: 
	I0919 09:51:41.986030    4464 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/enable-default-cni-826000/disk.qcow2 +20000M
	I0919 09:51:41.993277    4464 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 09:51:41.993300    4464 main.go:141] libmachine: STDERR: 
	I0919 09:51:41.993322    4464 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17240-943/.minikube/machines/enable-default-cni-826000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17240-943/.minikube/machines/enable-default-cni-826000/disk.qcow2
	I0919 09:51:41.993328    4464 main.go:141] libmachine: Starting QEMU VM...
	I0919 09:51:41.993373    4464 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17240-943/.minikube/machines/enable-default-cni-826000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/enable-default-cni-826000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17240-943/.minikube/machines/enable-default-cni-826000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:0c:8f:10:6c:db -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/enable-default-cni-826000/disk.qcow2
	I0919 09:51:41.994864    4464 main.go:141] libmachine: STDOUT: 
	I0919 09:51:41.994878    4464 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 09:51:41.994897    4464 client.go:171] LocalClient.Create took 221.922625ms
	I0919 09:51:43.997044    4464 start.go:128] duration metric: createHost completed in 2.244956833s
	I0919 09:51:43.997112    4464 start.go:83] releasing machines lock for "enable-default-cni-826000", held for 2.245082333s
	W0919 09:51:43.997175    4464 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:51:44.008256    4464 out.go:177] * Deleting "enable-default-cni-826000" in qemu2 ...
	W0919 09:51:44.029059    4464 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:51:44.029083    4464 start.go:703] Will try again in 5 seconds ...
	I0919 09:51:49.031178    4464 start.go:365] acquiring machines lock for enable-default-cni-826000: {Name:mk731f293d5b39390bcf4e15f4078ebe7c03576e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 09:51:49.031663    4464 start.go:369] acquired machines lock for "enable-default-cni-826000" in 367.834µs
	I0919 09:51:49.031812    4464 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-826000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:enable-default-cni-826000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 09:51:49.032098    4464 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 09:51:49.037785    4464 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0919 09:51:49.083259    4464 start.go:159] libmachine.API.Create for "enable-default-cni-826000" (driver="qemu2")
	I0919 09:51:49.083295    4464 client.go:168] LocalClient.Create starting
	I0919 09:51:49.083403    4464 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca.pem
	I0919 09:51:49.083449    4464 main.go:141] libmachine: Decoding PEM data...
	I0919 09:51:49.083469    4464 main.go:141] libmachine: Parsing certificate...
	I0919 09:51:49.083541    4464 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/cert.pem
	I0919 09:51:49.083577    4464 main.go:141] libmachine: Decoding PEM data...
	I0919 09:51:49.083591    4464 main.go:141] libmachine: Parsing certificate...
	I0919 09:51:49.084116    4464 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17240-943/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17240-943/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0919 09:51:49.212090    4464 main.go:141] libmachine: Creating SSH key...
	I0919 09:51:49.359479    4464 main.go:141] libmachine: Creating Disk image...
	I0919 09:51:49.359485    4464 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 09:51:49.359632    4464 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17240-943/.minikube/machines/enable-default-cni-826000/disk.qcow2.raw /Users/jenkins/minikube-integration/17240-943/.minikube/machines/enable-default-cni-826000/disk.qcow2
	I0919 09:51:49.368323    4464 main.go:141] libmachine: STDOUT: 
	I0919 09:51:49.368336    4464 main.go:141] libmachine: STDERR: 
	I0919 09:51:49.368386    4464 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/enable-default-cni-826000/disk.qcow2 +20000M
	I0919 09:51:49.375605    4464 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 09:51:49.375618    4464 main.go:141] libmachine: STDERR: 
	I0919 09:51:49.375635    4464 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17240-943/.minikube/machines/enable-default-cni-826000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17240-943/.minikube/machines/enable-default-cni-826000/disk.qcow2
	I0919 09:51:49.375642    4464 main.go:141] libmachine: Starting QEMU VM...
	I0919 09:51:49.375689    4464 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17240-943/.minikube/machines/enable-default-cni-826000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/enable-default-cni-826000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17240-943/.minikube/machines/enable-default-cni-826000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:75:01:32:d1:4f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/enable-default-cni-826000/disk.qcow2
	I0919 09:51:49.377305    4464 main.go:141] libmachine: STDOUT: 
	I0919 09:51:49.377319    4464 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 09:51:49.377331    4464 client.go:171] LocalClient.Create took 294.036292ms
	I0919 09:51:51.379465    4464 start.go:128] duration metric: createHost completed in 2.347377041s
	I0919 09:51:51.379533    4464 start.go:83] releasing machines lock for "enable-default-cni-826000", held for 2.347876833s
	W0919 09:51:51.379976    4464 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-826000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-826000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:51:51.391558    4464 out.go:177] 
	W0919 09:51:51.395641    4464 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 09:51:51.395667    4464 out.go:239] * 
	* 
	W0919 09:51:51.398360    4464 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 09:51:51.408549    4464 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.79s)
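Note that the run above also logged `E0919 09:51:41.736712 ... Found deprecated --enable-default-cni flag, setting --cni=bridge`: independent of the socket_vmnet failure, minikube rewrites the deprecated flag to the bridge CNI. A toy sketch of that rewrite (the `translate_cni_flags` helper is hypothetical, for illustration only):

```shell
# Mimics the flag rewrite minikube reports in the log: the deprecated
# --enable-default-cni flag maps to --cni=bridge; all other flags pass through.
translate_cni_flags() {
  for arg in "$@"; do
    case "$arg" in
      --enable-default-cni|--enable-default-cni=true) echo "--cni=bridge" ;;
      *) echo "$arg" ;;
    esac
  done
}

translate_cni_flags --memory=3072 --enable-default-cni=true --driver=qemu2
```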
TestNetworkPlugins/group/bridge/Start (9.76s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-826000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-826000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.759355542s)
-- stdout --
	* [bridge-826000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17240
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node bridge-826000 in cluster bridge-826000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-826000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I0919 09:51:53.558458    4576 out.go:296] Setting OutFile to fd 1 ...
	I0919 09:51:53.558598    4576 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:51:53.558601    4576 out.go:309] Setting ErrFile to fd 2...
	I0919 09:51:53.558603    4576 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:51:53.558726    4576 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17240-943/.minikube/bin
	I0919 09:51:53.559740    4576 out.go:303] Setting JSON to false
	I0919 09:51:53.574627    4576 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1287,"bootTime":1695141026,"procs":401,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0919 09:51:53.574681    4576 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0919 09:51:53.578597    4576 out.go:177] * [bridge-826000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0919 09:51:53.586600    4576 out.go:177]   - MINIKUBE_LOCATION=17240
	I0919 09:51:53.590547    4576 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	I0919 09:51:53.586675    4576 notify.go:220] Checking for updates...
	I0919 09:51:53.596568    4576 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 09:51:53.599552    4576 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 09:51:53.602540    4576 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	I0919 09:51:53.605576    4576 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 09:51:53.608838    4576 config.go:182] Loaded profile config "multinode-120000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 09:51:53.608885    4576 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 09:51:53.612506    4576 out.go:177] * Using the qemu2 driver based on user configuration
	I0919 09:51:53.618436    4576 start.go:298] selected driver: qemu2
	I0919 09:51:53.618443    4576 start.go:902] validating driver "qemu2" against <nil>
	I0919 09:51:53.618449    4576 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 09:51:53.620417    4576 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0919 09:51:53.623602    4576 out.go:177] * Automatically selected the socket_vmnet network
	I0919 09:51:53.626649    4576 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 09:51:53.626677    4576 cni.go:84] Creating CNI manager for "bridge"
	I0919 09:51:53.626681    4576 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0919 09:51:53.626686    4576 start_flags.go:321] config:
	{Name:bridge-826000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:bridge-826000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHA
gentPID:0 AutoPauseInterval:1m0s}
	I0919 09:51:53.631085    4576 iso.go:125] acquiring lock: {Name:mka8e023e06bcb4803b8b8b48f1b2d6ef6b15681 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 09:51:53.638549    4576 out.go:177] * Starting control plane node bridge-826000 in cluster bridge-826000
	I0919 09:51:53.642588    4576 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0919 09:51:53.642607    4576 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0919 09:51:53.642620    4576 cache.go:57] Caching tarball of preloaded images
	I0919 09:51:53.642682    4576 preload.go:174] Found /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0919 09:51:53.642687    4576 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0919 09:51:53.642751    4576 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/bridge-826000/config.json ...
	I0919 09:51:53.642764    4576 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/bridge-826000/config.json: {Name:mk5a4558e69e80b3a79d793b69c6882d52779887 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 09:51:53.642971    4576 start.go:365] acquiring machines lock for bridge-826000: {Name:mk731f293d5b39390bcf4e15f4078ebe7c03576e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 09:51:53.643002    4576 start.go:369] acquired machines lock for "bridge-826000" in 24.042µs
	I0919 09:51:53.643014    4576 start.go:93] Provisioning new machine with config: &{Name:bridge-826000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.2 ClusterName:bridge-826000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 09:51:53.643041    4576 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 09:51:53.651607    4576 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0919 09:51:53.667608    4576 start.go:159] libmachine.API.Create for "bridge-826000" (driver="qemu2")
	I0919 09:51:53.667636    4576 client.go:168] LocalClient.Create starting
	I0919 09:51:53.667693    4576 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca.pem
	I0919 09:51:53.667718    4576 main.go:141] libmachine: Decoding PEM data...
	I0919 09:51:53.667730    4576 main.go:141] libmachine: Parsing certificate...
	I0919 09:51:53.667767    4576 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/cert.pem
	I0919 09:51:53.667786    4576 main.go:141] libmachine: Decoding PEM data...
	I0919 09:51:53.667798    4576 main.go:141] libmachine: Parsing certificate...
	I0919 09:51:53.668098    4576 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17240-943/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17240-943/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0919 09:51:53.783937    4576 main.go:141] libmachine: Creating SSH key...
	I0919 09:51:53.947271    4576 main.go:141] libmachine: Creating Disk image...
	I0919 09:51:53.947280    4576 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 09:51:53.947437    4576 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17240-943/.minikube/machines/bridge-826000/disk.qcow2.raw /Users/jenkins/minikube-integration/17240-943/.minikube/machines/bridge-826000/disk.qcow2
	I0919 09:51:53.956299    4576 main.go:141] libmachine: STDOUT: 
	I0919 09:51:53.956314    4576 main.go:141] libmachine: STDERR: 
	I0919 09:51:53.956372    4576 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/bridge-826000/disk.qcow2 +20000M
	I0919 09:51:53.963639    4576 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 09:51:53.963658    4576 main.go:141] libmachine: STDERR: 
	I0919 09:51:53.963682    4576 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17240-943/.minikube/machines/bridge-826000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17240-943/.minikube/machines/bridge-826000/disk.qcow2
	I0919 09:51:53.963687    4576 main.go:141] libmachine: Starting QEMU VM...
	I0919 09:51:53.963721    4576 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17240-943/.minikube/machines/bridge-826000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/bridge-826000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17240-943/.minikube/machines/bridge-826000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:65:d5:9b:40:6a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/bridge-826000/disk.qcow2
	I0919 09:51:53.965240    4576 main.go:141] libmachine: STDOUT: 
	I0919 09:51:53.965252    4576 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 09:51:53.965271    4576 client.go:171] LocalClient.Create took 297.63425ms
	I0919 09:51:55.967438    4576 start.go:128] duration metric: createHost completed in 2.324404667s
	I0919 09:51:55.967530    4576 start.go:83] releasing machines lock for "bridge-826000", held for 2.3245575s
	W0919 09:51:55.967589    4576 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:51:55.979594    4576 out.go:177] * Deleting "bridge-826000" in qemu2 ...
	W0919 09:51:55.999331    4576 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:51:55.999363    4576 start.go:703] Will try again in 5 seconds ...
	I0919 09:52:01.001548    4576 start.go:365] acquiring machines lock for bridge-826000: {Name:mk731f293d5b39390bcf4e15f4078ebe7c03576e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 09:52:01.002026    4576 start.go:369] acquired machines lock for "bridge-826000" in 360.833µs
	I0919 09:52:01.002160    4576 start.go:93] Provisioning new machine with config: &{Name:bridge-826000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.2 ClusterName:bridge-826000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 09:52:01.002429    4576 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 09:52:01.012058    4576 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0919 09:52:01.058781    4576 start.go:159] libmachine.API.Create for "bridge-826000" (driver="qemu2")
	I0919 09:52:01.058842    4576 client.go:168] LocalClient.Create starting
	I0919 09:52:01.058969    4576 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca.pem
	I0919 09:52:01.059040    4576 main.go:141] libmachine: Decoding PEM data...
	I0919 09:52:01.059059    4576 main.go:141] libmachine: Parsing certificate...
	I0919 09:52:01.059127    4576 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/cert.pem
	I0919 09:52:01.059187    4576 main.go:141] libmachine: Decoding PEM data...
	I0919 09:52:01.059202    4576 main.go:141] libmachine: Parsing certificate...
	I0919 09:52:01.059756    4576 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17240-943/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17240-943/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0919 09:52:01.188065    4576 main.go:141] libmachine: Creating SSH key...
	I0919 09:52:01.230576    4576 main.go:141] libmachine: Creating Disk image...
	I0919 09:52:01.230582    4576 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 09:52:01.230732    4576 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17240-943/.minikube/machines/bridge-826000/disk.qcow2.raw /Users/jenkins/minikube-integration/17240-943/.minikube/machines/bridge-826000/disk.qcow2
	I0919 09:52:01.239202    4576 main.go:141] libmachine: STDOUT: 
	I0919 09:52:01.239215    4576 main.go:141] libmachine: STDERR: 
	I0919 09:52:01.239272    4576 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/bridge-826000/disk.qcow2 +20000M
	I0919 09:52:01.246417    4576 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 09:52:01.246428    4576 main.go:141] libmachine: STDERR: 
	I0919 09:52:01.246441    4576 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17240-943/.minikube/machines/bridge-826000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17240-943/.minikube/machines/bridge-826000/disk.qcow2
	I0919 09:52:01.246449    4576 main.go:141] libmachine: Starting QEMU VM...
	I0919 09:52:01.246481    4576 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17240-943/.minikube/machines/bridge-826000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/bridge-826000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17240-943/.minikube/machines/bridge-826000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:ce:91:4b:2a:24 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/bridge-826000/disk.qcow2
	I0919 09:52:01.248049    4576 main.go:141] libmachine: STDOUT: 
	I0919 09:52:01.248064    4576 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 09:52:01.248077    4576 client.go:171] LocalClient.Create took 189.233667ms
	I0919 09:52:03.250267    4576 start.go:128] duration metric: createHost completed in 2.247843833s
	I0919 09:52:03.250330    4576 start.go:83] releasing machines lock for "bridge-826000", held for 2.248319208s
	W0919 09:52:03.250803    4576 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-826000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-826000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:52:03.261561    4576 out.go:177] 
	W0919 09:52:03.265543    4576 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 09:52:03.265566    4576 out.go:239] * 
	* 
	W0919 09:52:03.268366    4576 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 09:52:03.277368    4576 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.76s)
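Every failure in this group is the same root cause: the qemu2 driver's `socket_vmnet_client` gets `Connection refused` on `/var/run/socket_vmnet`, meaning the socket_vmnet daemon is not running on the CI host. A hedged preflight sketch (not part of minikube; the socket path is taken from the log above) that would catch this before each test burns ~10s:

```shell
# Preflight check: verify the socket_vmnet daemon's unix socket exists
# before starting qemu2-driver tests. Path defaults to the one seen in
# the log; an override argument keeps the function testable off-host.
check_vmnet_socket() {
  local sock="${1:-/var/run/socket_vmnet}"
  if [ -S "$sock" ]; then
    echo "ok: $sock is a live socket"
  else
    echo "missing: $sock (start the socket_vmnet daemon before the run)"
    return 1
  fi
}

check_vmnet_socket || true   # on a healthy host this prints "ok: ..."
```

On a host where the check fails, restarting the daemon (however it is installed, e.g. via its launchd job) before re-running the suite would be the first thing to try.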

TestNetworkPlugins/group/kubenet/Start (9.86s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-826000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-826000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.859179542s)

-- stdout --
	* [kubenet-826000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17240
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kubenet-826000 in cluster kubenet-826000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-826000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0919 09:52:05.398789    4691 out.go:296] Setting OutFile to fd 1 ...
	I0919 09:52:05.398904    4691 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:52:05.398907    4691 out.go:309] Setting ErrFile to fd 2...
	I0919 09:52:05.398909    4691 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:52:05.399021    4691 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17240-943/.minikube/bin
	I0919 09:52:05.400015    4691 out.go:303] Setting JSON to false
	I0919 09:52:05.415209    4691 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1299,"bootTime":1695141026,"procs":404,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0919 09:52:05.415271    4691 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0919 09:52:05.419422    4691 out.go:177] * [kubenet-826000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0919 09:52:05.428314    4691 out.go:177]   - MINIKUBE_LOCATION=17240
	I0919 09:52:05.428379    4691 notify.go:220] Checking for updates...
	I0919 09:52:05.435302    4691 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	I0919 09:52:05.438317    4691 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 09:52:05.441322    4691 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 09:52:05.444345    4691 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	I0919 09:52:05.447254    4691 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 09:52:05.450726    4691 config.go:182] Loaded profile config "multinode-120000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 09:52:05.450777    4691 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 09:52:05.455295    4691 out.go:177] * Using the qemu2 driver based on user configuration
	I0919 09:52:05.462322    4691 start.go:298] selected driver: qemu2
	I0919 09:52:05.462331    4691 start.go:902] validating driver "qemu2" against <nil>
	I0919 09:52:05.462337    4691 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 09:52:05.464398    4691 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0919 09:52:05.472385    4691 out.go:177] * Automatically selected the socket_vmnet network
	I0919 09:52:05.475309    4691 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 09:52:05.475345    4691 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0919 09:52:05.475351    4691 start_flags.go:321] config:
	{Name:kubenet-826000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:kubenet-826000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHA
gentPID:0 AutoPauseInterval:1m0s}
	I0919 09:52:05.479710    4691 iso.go:125] acquiring lock: {Name:mka8e023e06bcb4803b8b8b48f1b2d6ef6b15681 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 09:52:05.487159    4691 out.go:177] * Starting control plane node kubenet-826000 in cluster kubenet-826000
	I0919 09:52:05.491369    4691 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0919 09:52:05.491389    4691 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0919 09:52:05.491402    4691 cache.go:57] Caching tarball of preloaded images
	I0919 09:52:05.491487    4691 preload.go:174] Found /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0919 09:52:05.491493    4691 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0919 09:52:05.491560    4691 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/kubenet-826000/config.json ...
	I0919 09:52:05.491577    4691 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/kubenet-826000/config.json: {Name:mkd2fb77bbd7edc5baaa421fae9f5f6e840eb829 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 09:52:05.491803    4691 start.go:365] acquiring machines lock for kubenet-826000: {Name:mk731f293d5b39390bcf4e15f4078ebe7c03576e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 09:52:05.491834    4691 start.go:369] acquired machines lock for "kubenet-826000" in 25.334µs
	I0919 09:52:05.491847    4691 start.go:93] Provisioning new machine with config: &{Name:kubenet-826000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.2 ClusterName:kubenet-826000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 09:52:05.491879    4691 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 09:52:05.496305    4691 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0919 09:52:05.512931    4691 start.go:159] libmachine.API.Create for "kubenet-826000" (driver="qemu2")
	I0919 09:52:05.512966    4691 client.go:168] LocalClient.Create starting
	I0919 09:52:05.513021    4691 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca.pem
	I0919 09:52:05.513048    4691 main.go:141] libmachine: Decoding PEM data...
	I0919 09:52:05.513061    4691 main.go:141] libmachine: Parsing certificate...
	I0919 09:52:05.513102    4691 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/cert.pem
	I0919 09:52:05.513122    4691 main.go:141] libmachine: Decoding PEM data...
	I0919 09:52:05.513131    4691 main.go:141] libmachine: Parsing certificate...
	I0919 09:52:05.513441    4691 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17240-943/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17240-943/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0919 09:52:05.629454    4691 main.go:141] libmachine: Creating SSH key...
	I0919 09:52:05.870875    4691 main.go:141] libmachine: Creating Disk image...
	I0919 09:52:05.870887    4691 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 09:52:05.871050    4691 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17240-943/.minikube/machines/kubenet-826000/disk.qcow2.raw /Users/jenkins/minikube-integration/17240-943/.minikube/machines/kubenet-826000/disk.qcow2
	I0919 09:52:05.879951    4691 main.go:141] libmachine: STDOUT: 
	I0919 09:52:05.879966    4691 main.go:141] libmachine: STDERR: 
	I0919 09:52:05.880018    4691 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/kubenet-826000/disk.qcow2 +20000M
	I0919 09:52:05.887352    4691 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 09:52:05.887377    4691 main.go:141] libmachine: STDERR: 
	I0919 09:52:05.887401    4691 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17240-943/.minikube/machines/kubenet-826000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17240-943/.minikube/machines/kubenet-826000/disk.qcow2
	I0919 09:52:05.887406    4691 main.go:141] libmachine: Starting QEMU VM...
	I0919 09:52:05.887442    4691 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17240-943/.minikube/machines/kubenet-826000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/kubenet-826000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17240-943/.minikube/machines/kubenet-826000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:cb:69:95:63:43 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/kubenet-826000/disk.qcow2
	I0919 09:52:05.889040    4691 main.go:141] libmachine: STDOUT: 
	I0919 09:52:05.889053    4691 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 09:52:05.889075    4691 client.go:171] LocalClient.Create took 376.1105ms
	I0919 09:52:07.891273    4691 start.go:128] duration metric: createHost completed in 2.399410583s
	I0919 09:52:07.891334    4691 start.go:83] releasing machines lock for "kubenet-826000", held for 2.399531667s
	W0919 09:52:07.891415    4691 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:52:07.899823    4691 out.go:177] * Deleting "kubenet-826000" in qemu2 ...
	W0919 09:52:07.924686    4691 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:52:07.924723    4691 start.go:703] Will try again in 5 seconds ...
	I0919 09:52:12.926389    4691 start.go:365] acquiring machines lock for kubenet-826000: {Name:mk731f293d5b39390bcf4e15f4078ebe7c03576e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 09:52:12.926966    4691 start.go:369] acquired machines lock for "kubenet-826000" in 450.75µs
	I0919 09:52:12.927100    4691 start.go:93] Provisioning new machine with config: &{Name:kubenet-826000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:kubenet-826000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 09:52:12.927366    4691 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 09:52:12.936004    4691 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0919 09:52:12.981597    4691 start.go:159] libmachine.API.Create for "kubenet-826000" (driver="qemu2")
	I0919 09:52:12.981637    4691 client.go:168] LocalClient.Create starting
	I0919 09:52:12.981744    4691 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca.pem
	I0919 09:52:12.981796    4691 main.go:141] libmachine: Decoding PEM data...
	I0919 09:52:12.981816    4691 main.go:141] libmachine: Parsing certificate...
	I0919 09:52:12.981888    4691 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/cert.pem
	I0919 09:52:12.981924    4691 main.go:141] libmachine: Decoding PEM data...
	I0919 09:52:12.981934    4691 main.go:141] libmachine: Parsing certificate...
	I0919 09:52:12.982494    4691 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17240-943/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17240-943/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0919 09:52:13.110698    4691 main.go:141] libmachine: Creating SSH key...
	I0919 09:52:13.175201    4691 main.go:141] libmachine: Creating Disk image...
	I0919 09:52:13.175208    4691 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 09:52:13.175341    4691 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17240-943/.minikube/machines/kubenet-826000/disk.qcow2.raw /Users/jenkins/minikube-integration/17240-943/.minikube/machines/kubenet-826000/disk.qcow2
	I0919 09:52:13.183826    4691 main.go:141] libmachine: STDOUT: 
	I0919 09:52:13.183842    4691 main.go:141] libmachine: STDERR: 
	I0919 09:52:13.183917    4691 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/kubenet-826000/disk.qcow2 +20000M
	I0919 09:52:13.191096    4691 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 09:52:13.191108    4691 main.go:141] libmachine: STDERR: 
	I0919 09:52:13.191123    4691 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17240-943/.minikube/machines/kubenet-826000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17240-943/.minikube/machines/kubenet-826000/disk.qcow2
	I0919 09:52:13.191130    4691 main.go:141] libmachine: Starting QEMU VM...
	I0919 09:52:13.191172    4691 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17240-943/.minikube/machines/kubenet-826000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/kubenet-826000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17240-943/.minikube/machines/kubenet-826000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:f7:5c:ce:7f:ea -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/kubenet-826000/disk.qcow2
	I0919 09:52:13.192701    4691 main.go:141] libmachine: STDOUT: 
	I0919 09:52:13.192714    4691 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 09:52:13.192726    4691 client.go:171] LocalClient.Create took 211.088292ms
	I0919 09:52:15.194913    4691 start.go:128] duration metric: createHost completed in 2.267538334s
	I0919 09:52:15.195005    4691 start.go:83] releasing machines lock for "kubenet-826000", held for 2.26805425s
	W0919 09:52:15.195511    4691 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-826000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-826000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:52:15.205117    4691 out.go:177] 
	W0919 09:52:15.209233    4691 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 09:52:15.209296    4691 out.go:239] * 
	* 
	W0919 09:52:15.212032    4691 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 09:52:15.218174    4691 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.86s)
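Editor's note: every VM start in this group fails with the same root cause, `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. the socket_vmnet daemon on the build agent is not accepting connections at the path minikube was configured with (`SocketVMnetPath:/var/run/socket_vmnet` in the config above). A minimal diagnostic sketch one could run on the agent before the suite; the helper name `check_socket` and the `brew services` remedy are illustrative assumptions, not part of the test run:

```shell
#!/bin/sh
# check_socket: report whether a unix-domain socket exists at the given path.
# (Existence alone does not prove the daemon is accepting connections, but a
# missing socket explains the "Connection refused" seen in every failure.)
check_socket() {
  if [ -S "$1" ]; then
    echo "present"
  else
    echo "missing"
  fi
}

# The path the qemu2 driver was configured with in this report:
check_socket /var/run/socket_vmnet

# If missing, (re)start the daemon, e.g. via Homebrew services (assumed setup):
#   sudo brew services start socket_vmnet
```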

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (9.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-826000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-826000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.707245666s)

                                                
                                                
-- stdout --
	* [custom-flannel-826000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17240
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node custom-flannel-826000 in cluster custom-flannel-826000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-826000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 09:52:17.318930    4809 out.go:296] Setting OutFile to fd 1 ...
	I0919 09:52:17.319064    4809 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:52:17.319066    4809 out.go:309] Setting ErrFile to fd 2...
	I0919 09:52:17.319069    4809 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:52:17.319197    4809 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17240-943/.minikube/bin
	I0919 09:52:17.320236    4809 out.go:303] Setting JSON to false
	I0919 09:52:17.335415    4809 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1311,"bootTime":1695141026,"procs":402,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0919 09:52:17.335498    4809 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0919 09:52:17.340671    4809 out.go:177] * [custom-flannel-826000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0919 09:52:17.351529    4809 out.go:177]   - MINIKUBE_LOCATION=17240
	I0919 09:52:17.347765    4809 notify.go:220] Checking for updates...
	I0919 09:52:17.357641    4809 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	I0919 09:52:17.363652    4809 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 09:52:17.366764    4809 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 09:52:17.369611    4809 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	I0919 09:52:17.373633    4809 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 09:52:17.376995    4809 config.go:182] Loaded profile config "multinode-120000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 09:52:17.377044    4809 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 09:52:17.380598    4809 out.go:177] * Using the qemu2 driver based on user configuration
	I0919 09:52:17.387635    4809 start.go:298] selected driver: qemu2
	I0919 09:52:17.387641    4809 start.go:902] validating driver "qemu2" against <nil>
	I0919 09:52:17.387646    4809 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 09:52:17.389853    4809 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0919 09:52:17.392553    4809 out.go:177] * Automatically selected the socket_vmnet network
	I0919 09:52:17.396707    4809 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 09:52:17.396731    4809 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0919 09:52:17.396744    4809 start_flags.go:316] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0919 09:52:17.396749    4809 start_flags.go:321] config:
	{Name:custom-flannel-826000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:custom-flannel-826000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 09:52:17.400952    4809 iso.go:125] acquiring lock: {Name:mka8e023e06bcb4803b8b8b48f1b2d6ef6b15681 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 09:52:17.403759    4809 out.go:177] * Starting control plane node custom-flannel-826000 in cluster custom-flannel-826000
	I0919 09:52:17.411585    4809 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0919 09:52:17.411601    4809 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0919 09:52:17.411610    4809 cache.go:57] Caching tarball of preloaded images
	I0919 09:52:17.411674    4809 preload.go:174] Found /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0919 09:52:17.411688    4809 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0919 09:52:17.411761    4809 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/custom-flannel-826000/config.json ...
	I0919 09:52:17.411778    4809 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/custom-flannel-826000/config.json: {Name:mk6b1728a3dc6b62a38893810b1c66cafbac8c03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 09:52:17.411993    4809 start.go:365] acquiring machines lock for custom-flannel-826000: {Name:mk731f293d5b39390bcf4e15f4078ebe7c03576e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 09:52:17.412027    4809 start.go:369] acquired machines lock for "custom-flannel-826000" in 25.333µs
	I0919 09:52:17.412040    4809 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-826000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:custom-flannel-826000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 09:52:17.412082    4809 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 09:52:17.420633    4809 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0919 09:52:17.436814    4809 start.go:159] libmachine.API.Create for "custom-flannel-826000" (driver="qemu2")
	I0919 09:52:17.436840    4809 client.go:168] LocalClient.Create starting
	I0919 09:52:17.436911    4809 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca.pem
	I0919 09:52:17.436939    4809 main.go:141] libmachine: Decoding PEM data...
	I0919 09:52:17.436950    4809 main.go:141] libmachine: Parsing certificate...
	I0919 09:52:17.436993    4809 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/cert.pem
	I0919 09:52:17.437012    4809 main.go:141] libmachine: Decoding PEM data...
	I0919 09:52:17.437018    4809 main.go:141] libmachine: Parsing certificate...
	I0919 09:52:17.437359    4809 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17240-943/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17240-943/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0919 09:52:17.554022    4809 main.go:141] libmachine: Creating SSH key...
	I0919 09:52:17.652065    4809 main.go:141] libmachine: Creating Disk image...
	I0919 09:52:17.652073    4809 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 09:52:17.652216    4809 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17240-943/.minikube/machines/custom-flannel-826000/disk.qcow2.raw /Users/jenkins/minikube-integration/17240-943/.minikube/machines/custom-flannel-826000/disk.qcow2
	I0919 09:52:17.660879    4809 main.go:141] libmachine: STDOUT: 
	I0919 09:52:17.660901    4809 main.go:141] libmachine: STDERR: 
	I0919 09:52:17.660953    4809 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/custom-flannel-826000/disk.qcow2 +20000M
	I0919 09:52:17.668232    4809 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 09:52:17.668248    4809 main.go:141] libmachine: STDERR: 
	I0919 09:52:17.668264    4809 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17240-943/.minikube/machines/custom-flannel-826000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17240-943/.minikube/machines/custom-flannel-826000/disk.qcow2
	I0919 09:52:17.668271    4809 main.go:141] libmachine: Starting QEMU VM...
	I0919 09:52:17.668312    4809 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17240-943/.minikube/machines/custom-flannel-826000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/custom-flannel-826000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17240-943/.minikube/machines/custom-flannel-826000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:3d:b2:5d:7c:b6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/custom-flannel-826000/disk.qcow2
	I0919 09:52:17.669861    4809 main.go:141] libmachine: STDOUT: 
	I0919 09:52:17.669878    4809 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 09:52:17.669894    4809 client.go:171] LocalClient.Create took 233.052166ms
	I0919 09:52:19.672041    4809 start.go:128] duration metric: createHost completed in 2.259974833s
	I0919 09:52:19.672102    4809 start.go:83] releasing machines lock for "custom-flannel-826000", held for 2.260104583s
	W0919 09:52:19.672172    4809 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:52:19.679385    4809 out.go:177] * Deleting "custom-flannel-826000" in qemu2 ...
	W0919 09:52:19.701527    4809 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:52:19.701556    4809 start.go:703] Will try again in 5 seconds ...
	I0919 09:52:24.702475    4809 start.go:365] acquiring machines lock for custom-flannel-826000: {Name:mk731f293d5b39390bcf4e15f4078ebe7c03576e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 09:52:24.702784    4809 start.go:369] acquired machines lock for "custom-flannel-826000" in 248.542µs
	I0919 09:52:24.702874    4809 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-826000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:custom-flannel-826000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 09:52:24.703108    4809 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 09:52:24.712549    4809 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0919 09:52:24.752409    4809 start.go:159] libmachine.API.Create for "custom-flannel-826000" (driver="qemu2")
	I0919 09:52:24.752457    4809 client.go:168] LocalClient.Create starting
	I0919 09:52:24.752584    4809 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca.pem
	I0919 09:52:24.752651    4809 main.go:141] libmachine: Decoding PEM data...
	I0919 09:52:24.752682    4809 main.go:141] libmachine: Parsing certificate...
	I0919 09:52:24.752755    4809 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/cert.pem
	I0919 09:52:24.752796    4809 main.go:141] libmachine: Decoding PEM data...
	I0919 09:52:24.752814    4809 main.go:141] libmachine: Parsing certificate...
	I0919 09:52:24.753356    4809 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17240-943/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17240-943/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0919 09:52:24.886960    4809 main.go:141] libmachine: Creating SSH key...
	I0919 09:52:24.941010    4809 main.go:141] libmachine: Creating Disk image...
	I0919 09:52:24.941015    4809 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 09:52:24.941146    4809 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17240-943/.minikube/machines/custom-flannel-826000/disk.qcow2.raw /Users/jenkins/minikube-integration/17240-943/.minikube/machines/custom-flannel-826000/disk.qcow2
	I0919 09:52:24.949763    4809 main.go:141] libmachine: STDOUT: 
	I0919 09:52:24.949777    4809 main.go:141] libmachine: STDERR: 
	I0919 09:52:24.949825    4809 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/custom-flannel-826000/disk.qcow2 +20000M
	I0919 09:52:24.957017    4809 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 09:52:24.957028    4809 main.go:141] libmachine: STDERR: 
	I0919 09:52:24.957041    4809 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17240-943/.minikube/machines/custom-flannel-826000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17240-943/.minikube/machines/custom-flannel-826000/disk.qcow2
	I0919 09:52:24.957046    4809 main.go:141] libmachine: Starting QEMU VM...
	I0919 09:52:24.957093    4809 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17240-943/.minikube/machines/custom-flannel-826000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/custom-flannel-826000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17240-943/.minikube/machines/custom-flannel-826000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:b4:ac:29:3c:36 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/custom-flannel-826000/disk.qcow2
	I0919 09:52:24.958650    4809 main.go:141] libmachine: STDOUT: 
	I0919 09:52:24.958662    4809 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 09:52:24.958676    4809 client.go:171] LocalClient.Create took 206.216583ms
	I0919 09:52:26.960814    4809 start.go:128] duration metric: createHost completed in 2.257723292s
	I0919 09:52:26.960877    4809 start.go:83] releasing machines lock for "custom-flannel-826000", held for 2.258117375s
	W0919 09:52:26.961280    4809 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-826000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-826000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:52:26.970972    4809 out.go:177] 
	W0919 09:52:26.975010    4809 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 09:52:26.975052    4809 out.go:239] * 
	* 
	W0919 09:52:26.977850    4809 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 09:52:26.985931    4809 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.71s)
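The repeated `Failed to connect to "/var/run/socket_vmnet": Connection refused` in the stderr above indicates the socket_vmnet networking daemon was not running on the CI host, so every qemu2 profile fails at VM start. A minimal diagnostic sketch (not part of the minikube test suite; the function name and the Homebrew-service hint are assumptions, only the socket path comes from the log):

```shell
#!/bin/sh
# Hedged sketch: check whether the unix socket a qemu2 profile needs exists.
check_socket_vmnet() {
  sock="${1:-/var/run/socket_vmnet}"   # default path seen in this log
  if [ -S "$sock" ]; then
    echo "ok: $sock is a unix socket"
    return 0
  fi
  echo "missing: $sock"
  echo "hint: on a Homebrew install, 'sudo brew services start socket_vmnet' may start the daemon"
  return 1
}

check_socket_vmnet /var/run/socket_vmnet || echo "qemu2 profiles will keep failing until the daemon is up"
```

Running this before the network-plugin test group would distinguish a host-configuration problem from a genuine minikube regression.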

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (2.2s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.3377450776.exe start -p stopped-upgrade-282000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:196: (dbg) Non-zero exit: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.3377450776.exe start -p stopped-upgrade-282000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.3377450776.exe: permission denied (1.110666ms)
version_upgrade_test.go:196: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.3377450776.exe start -p stopped-upgrade-282000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:196: (dbg) Non-zero exit: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.3377450776.exe start -p stopped-upgrade-282000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.3377450776.exe: permission denied (4.863917ms)
version_upgrade_test.go:196: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.3377450776.exe start -p stopped-upgrade-282000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:196: (dbg) Non-zero exit: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.3377450776.exe start -p stopped-upgrade-282000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.3377450776.exe: permission denied (5.10125ms)
version_upgrade_test.go:202: legacy v1.6.2 start failed: fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.3377450776.exe: permission denied
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (2.20s)
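The `fork/exec ...: permission denied` above typically means the temp file the test downloaded the legacy v1.6.2 binary into was written without the execute bit. A stand-in sketch (not the real binary; the stand-in script and paths are illustrative) that reproduces the symptom and the likely fix:

```shell
#!/bin/sh
# Hedged sketch: reproduce 'permission denied' from exec-ing a file
# that lacks the execute bit, then fix it with chmod +x.
bin=$(mktemp)                          # stand-in for the downloaded binary
printf '#!/bin/sh\necho started\n' > "$bin"
chmod 0600 "$bin"                      # no execute bit: exec now fails
"$bin" 2>/dev/null || echo "permission denied, as in the failure above"
chmod +x "$bin"                        # restoring the execute bit fixes exec
"$bin"
rm -f "$bin"
```

If this is the cause, the fix belongs in the harness that writes the temp binary, not in minikube itself.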

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.12s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-282000
version_upgrade_test.go:219: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p stopped-upgrade-282000: exit status 85 (116.35ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-826000 sudo                                | bridge-826000         | jenkins | v1.31.2 | 19 Sep 23 09:52 PDT |                     |
	|         | systemctl status kubelet --all                       |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p bridge-826000 sudo                                | bridge-826000         | jenkins | v1.31.2 | 19 Sep 23 09:52 PDT |                     |
	|         | systemctl cat kubelet                                |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p bridge-826000 sudo                                | bridge-826000         | jenkins | v1.31.2 | 19 Sep 23 09:52 PDT |                     |
	|         | journalctl -xeu kubelet --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p bridge-826000 sudo cat                            | bridge-826000         | jenkins | v1.31.2 | 19 Sep 23 09:52 PDT |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                       |         |         |                     |                     |
	| ssh     | -p bridge-826000 sudo cat                            | bridge-826000         | jenkins | v1.31.2 | 19 Sep 23 09:52 PDT |                     |
	|         | /var/lib/kubelet/config.yaml                         |                       |         |         |                     |                     |
	| ssh     | -p bridge-826000 sudo                                | bridge-826000         | jenkins | v1.31.2 | 19 Sep 23 09:52 PDT |                     |
	|         | systemctl status docker --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p bridge-826000 sudo                                | bridge-826000         | jenkins | v1.31.2 | 19 Sep 23 09:52 PDT |                     |
	|         | systemctl cat docker                                 |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p bridge-826000 sudo cat                            | bridge-826000         | jenkins | v1.31.2 | 19 Sep 23 09:52 PDT |                     |
	|         | /etc/docker/daemon.json                              |                       |         |         |                     |                     |
	| ssh     | -p bridge-826000 sudo docker                         | bridge-826000         | jenkins | v1.31.2 | 19 Sep 23 09:52 PDT |                     |
	|         | system info                                          |                       |         |         |                     |                     |
	| ssh     | -p bridge-826000 sudo                                | bridge-826000         | jenkins | v1.31.2 | 19 Sep 23 09:52 PDT |                     |
	|         | systemctl status cri-docker                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p bridge-826000 sudo                                | bridge-826000         | jenkins | v1.31.2 | 19 Sep 23 09:52 PDT |                     |
	|         | systemctl cat cri-docker                             |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p bridge-826000 sudo cat                            | bridge-826000         | jenkins | v1.31.2 | 19 Sep 23 09:52 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                       |         |         |                     |                     |
	| ssh     | -p bridge-826000 sudo cat                            | bridge-826000         | jenkins | v1.31.2 | 19 Sep 23 09:52 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                       |         |         |                     |                     |
	| ssh     | -p bridge-826000 sudo                                | bridge-826000         | jenkins | v1.31.2 | 19 Sep 23 09:52 PDT |                     |
	|         | cri-dockerd --version                                |                       |         |         |                     |                     |
	| ssh     | -p bridge-826000 sudo                                | bridge-826000         | jenkins | v1.31.2 | 19 Sep 23 09:52 PDT |                     |
	|         | systemctl status containerd                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p bridge-826000 sudo                                | bridge-826000         | jenkins | v1.31.2 | 19 Sep 23 09:52 PDT |                     |
	|         | systemctl cat containerd                             |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p bridge-826000 sudo cat                            | bridge-826000         | jenkins | v1.31.2 | 19 Sep 23 09:52 PDT |                     |
	|         | /lib/systemd/system/containerd.service               |                       |         |         |                     |                     |
	| ssh     | -p bridge-826000 sudo cat                            | bridge-826000         | jenkins | v1.31.2 | 19 Sep 23 09:52 PDT |                     |
	|         | /etc/containerd/config.toml                          |                       |         |         |                     |                     |
	| ssh     | -p bridge-826000 sudo                                | bridge-826000         | jenkins | v1.31.2 | 19 Sep 23 09:52 PDT |                     |
	|         | containerd config dump                               |                       |         |         |                     |                     |
	| ssh     | -p bridge-826000 sudo                                | bridge-826000         | jenkins | v1.31.2 | 19 Sep 23 09:52 PDT |                     |
	|         | systemctl status crio --all                          |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p bridge-826000 sudo                                | bridge-826000         | jenkins | v1.31.2 | 19 Sep 23 09:52 PDT |                     |
	|         | systemctl cat crio --no-pager                        |                       |         |         |                     |                     |
	| ssh     | -p bridge-826000 sudo find                           | bridge-826000         | jenkins | v1.31.2 | 19 Sep 23 09:52 PDT |                     |
	|         | /etc/crio -type f -exec sh -c                        |                       |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                       |         |         |                     |                     |
	| ssh     | -p bridge-826000 sudo crio                           | bridge-826000         | jenkins | v1.31.2 | 19 Sep 23 09:52 PDT |                     |
	|         | config                                               |                       |         |         |                     |                     |
	| delete  | -p bridge-826000                                     | bridge-826000         | jenkins | v1.31.2 | 19 Sep 23 09:52 PDT | 19 Sep 23 09:52 PDT |
	| start   | -p kubenet-826000                                    | kubenet-826000        | jenkins | v1.31.2 | 19 Sep 23 09:52 PDT |                     |
	|         | --memory=3072                                        |                       |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                       |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                       |         |         |                     |                     |
	|         | --network-plugin=kubenet                             |                       |         |         |                     |                     |
	|         | --driver=qemu2                                       |                       |         |         |                     |                     |
	| ssh     | -p kubenet-826000 sudo cat                           | kubenet-826000        | jenkins | v1.31.2 | 19 Sep 23 09:52 PDT |                     |
	|         | /etc/nsswitch.conf                                   |                       |         |         |                     |                     |
	| ssh     | -p kubenet-826000 sudo cat                           | kubenet-826000        | jenkins | v1.31.2 | 19 Sep 23 09:52 PDT |                     |
	|         | /etc/hosts                                           |                       |         |         |                     |                     |
	| ssh     | -p kubenet-826000 sudo cat                           | kubenet-826000        | jenkins | v1.31.2 | 19 Sep 23 09:52 PDT |                     |
	|         | /etc/resolv.conf                                     |                       |         |         |                     |                     |
	| ssh     | -p kubenet-826000 sudo crictl                        | kubenet-826000        | jenkins | v1.31.2 | 19 Sep 23 09:52 PDT |                     |
	|         | pods                                                 |                       |         |         |                     |                     |
	| ssh     | -p kubenet-826000 sudo crictl                        | kubenet-826000        | jenkins | v1.31.2 | 19 Sep 23 09:52 PDT |                     |
	|         | ps --all                                             |                       |         |         |                     |                     |
	| ssh     | -p kubenet-826000 sudo find                          | kubenet-826000        | jenkins | v1.31.2 | 19 Sep 23 09:52 PDT |                     |
	|         | /etc/cni -type f -exec sh -c                         |                       |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                       |         |         |                     |                     |
	| ssh     | -p kubenet-826000 sudo ip a s                        | kubenet-826000        | jenkins | v1.31.2 | 19 Sep 23 09:52 PDT |                     |
	| ssh     | -p kubenet-826000 sudo ip r s                        | kubenet-826000        | jenkins | v1.31.2 | 19 Sep 23 09:52 PDT |                     |
	| ssh     | -p kubenet-826000 sudo                               | kubenet-826000        | jenkins | v1.31.2 | 19 Sep 23 09:52 PDT |                     |
	|         | iptables-save                                        |                       |         |         |                     |                     |
	| ssh     | -p kubenet-826000 sudo                               | kubenet-826000        | jenkins | v1.31.2 | 19 Sep 23 09:52 PDT |                     |
	|         | iptables -t nat -L -n -v                             |                       |         |         |                     |                     |
	| ssh     | -p kubenet-826000 sudo                               | kubenet-826000        | jenkins | v1.31.2 | 19 Sep 23 09:52 PDT |                     |
	|         | systemctl status kubelet --all                       |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p kubenet-826000 sudo                               | kubenet-826000        | jenkins | v1.31.2 | 19 Sep 23 09:52 PDT |                     |
	|         | systemctl cat kubelet                                |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p kubenet-826000 sudo                               | kubenet-826000        | jenkins | v1.31.2 | 19 Sep 23 09:52 PDT |                     |
	|         | journalctl -xeu kubelet --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p kubenet-826000 sudo cat                           | kubenet-826000        | jenkins | v1.31.2 | 19 Sep 23 09:52 PDT |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                       |         |         |                     |                     |
	| ssh     | -p kubenet-826000 sudo cat                           | kubenet-826000        | jenkins | v1.31.2 | 19 Sep 23 09:52 PDT |                     |
	|         | /var/lib/kubelet/config.yaml                         |                       |         |         |                     |                     |
	| ssh     | -p kubenet-826000 sudo                               | kubenet-826000        | jenkins | v1.31.2 | 19 Sep 23 09:52 PDT |                     |
	|         | systemctl status docker --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p kubenet-826000 sudo                               | kubenet-826000        | jenkins | v1.31.2 | 19 Sep 23 09:52 PDT |                     |
	|         | systemctl cat docker                                 |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p kubenet-826000 sudo cat                           | kubenet-826000        | jenkins | v1.31.2 | 19 Sep 23 09:52 PDT |                     |
	|         | /etc/docker/daemon.json                              |                       |         |         |                     |                     |
	| ssh     | -p kubenet-826000 sudo docker                        | kubenet-826000        | jenkins | v1.31.2 | 19 Sep 23 09:52 PDT |                     |
	|         | system info                                          |                       |         |         |                     |                     |
	| ssh     | -p kubenet-826000 sudo                               | kubenet-826000        | jenkins | v1.31.2 | 19 Sep 23 09:52 PDT |                     |
	|         | systemctl status cri-docker                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p kubenet-826000 sudo                               | kubenet-826000        | jenkins | v1.31.2 | 19 Sep 23 09:52 PDT |                     |
	|         | systemctl cat cri-docker                             |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p kubenet-826000 sudo cat                           | kubenet-826000        | jenkins | v1.31.2 | 19 Sep 23 09:52 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                       |         |         |                     |                     |
	| ssh     | -p kubenet-826000 sudo cat                           | kubenet-826000        | jenkins | v1.31.2 | 19 Sep 23 09:52 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                       |         |         |                     |                     |
	| ssh     | -p kubenet-826000 sudo                               | kubenet-826000        | jenkins | v1.31.2 | 19 Sep 23 09:52 PDT |                     |
	|         | cri-dockerd --version                                |                       |         |         |                     |                     |
	| ssh     | -p kubenet-826000 sudo                               | kubenet-826000        | jenkins | v1.31.2 | 19 Sep 23 09:52 PDT |                     |
	|         | systemctl status containerd                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p kubenet-826000 sudo                               | kubenet-826000        | jenkins | v1.31.2 | 19 Sep 23 09:52 PDT |                     |
	|         | systemctl cat containerd                             |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p kubenet-826000 sudo cat                           | kubenet-826000        | jenkins | v1.31.2 | 19 Sep 23 09:52 PDT |                     |
	|         | /lib/systemd/system/containerd.service               |                       |         |         |                     |                     |
	| ssh     | -p kubenet-826000 sudo cat                           | kubenet-826000        | jenkins | v1.31.2 | 19 Sep 23 09:52 PDT |                     |
	|         | /etc/containerd/config.toml                          |                       |         |         |                     |                     |
	| ssh     | -p kubenet-826000 sudo                               | kubenet-826000        | jenkins | v1.31.2 | 19 Sep 23 09:52 PDT |                     |
	|         | containerd config dump                               |                       |         |         |                     |                     |
	| ssh     | -p kubenet-826000 sudo                               | kubenet-826000        | jenkins | v1.31.2 | 19 Sep 23 09:52 PDT |                     |
	|         | systemctl status crio --all                          |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p kubenet-826000 sudo                               | kubenet-826000        | jenkins | v1.31.2 | 19 Sep 23 09:52 PDT |                     |
	|         | systemctl cat crio --no-pager                        |                       |         |         |                     |                     |
	| ssh     | -p kubenet-826000 sudo find                          | kubenet-826000        | jenkins | v1.31.2 | 19 Sep 23 09:52 PDT |                     |
	|         | /etc/crio -type f -exec sh -c                        |                       |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                       |         |         |                     |                     |
	| ssh     | -p kubenet-826000 sudo crio                          | kubenet-826000        | jenkins | v1.31.2 | 19 Sep 23 09:52 PDT |                     |
	|         | config                                               |                       |         |         |                     |                     |
	| delete  | -p kubenet-826000                                    | kubenet-826000        | jenkins | v1.31.2 | 19 Sep 23 09:52 PDT | 19 Sep 23 09:52 PDT |
	| start   | -p custom-flannel-826000                             | custom-flannel-826000 | jenkins | v1.31.2 | 19 Sep 23 09:52 PDT |                     |
	|         | --memory=3072 --alsologtostderr                      |                       |         |         |                     |                     |
	|         | --wait=true --wait-timeout=15m                       |                       |         |         |                     |                     |
	|         | --cni=testdata/kube-flannel.yaml                     |                       |         |         |                     |                     |
	|         | --driver=qemu2                                       |                       |         |         |                     |                     |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/19 09:52:17
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.21.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 09:52:17.318930    4809 out.go:296] Setting OutFile to fd 1 ...
	I0919 09:52:17.319064    4809 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:52:17.319066    4809 out.go:309] Setting ErrFile to fd 2...
	I0919 09:52:17.319069    4809 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:52:17.319197    4809 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17240-943/.minikube/bin
	I0919 09:52:17.320236    4809 out.go:303] Setting JSON to false
	I0919 09:52:17.335415    4809 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1311,"bootTime":1695141026,"procs":402,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0919 09:52:17.335498    4809 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0919 09:52:17.340671    4809 out.go:177] * [custom-flannel-826000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0919 09:52:17.351529    4809 out.go:177]   - MINIKUBE_LOCATION=17240
	I0919 09:52:17.347765    4809 notify.go:220] Checking for updates...
	I0919 09:52:17.357641    4809 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	I0919 09:52:17.363652    4809 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 09:52:17.366764    4809 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 09:52:17.369611    4809 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	I0919 09:52:17.373633    4809 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 09:52:17.376995    4809 config.go:182] Loaded profile config "multinode-120000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 09:52:17.377044    4809 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 09:52:17.380598    4809 out.go:177] * Using the qemu2 driver based on user configuration
	I0919 09:52:17.387635    4809 start.go:298] selected driver: qemu2
	I0919 09:52:17.387641    4809 start.go:902] validating driver "qemu2" against <nil>
	I0919 09:52:17.387646    4809 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 09:52:17.389853    4809 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0919 09:52:17.392553    4809 out.go:177] * Automatically selected the socket_vmnet network
	I0919 09:52:17.396707    4809 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 09:52:17.396731    4809 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0919 09:52:17.396744    4809 start_flags.go:316] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0919 09:52:17.396749    4809 start_flags.go:321] config:
	{Name:custom-flannel-826000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:custom-flannel-826000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/sock
et_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 09:52:17.400952    4809 iso.go:125] acquiring lock: {Name:mka8e023e06bcb4803b8b8b48f1b2d6ef6b15681 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 09:52:17.403759    4809 out.go:177] * Starting control plane node custom-flannel-826000 in cluster custom-flannel-826000
	I0919 09:52:17.411585    4809 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0919 09:52:17.411601    4809 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0919 09:52:17.411610    4809 cache.go:57] Caching tarball of preloaded images
	I0919 09:52:17.411674    4809 preload.go:174] Found /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0919 09:52:17.411688    4809 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0919 09:52:17.411761    4809 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/custom-flannel-826000/config.json ...
	I0919 09:52:17.411778    4809 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/custom-flannel-826000/config.json: {Name:mk6b1728a3dc6b62a38893810b1c66cafbac8c03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 09:52:17.411993    4809 start.go:365] acquiring machines lock for custom-flannel-826000: {Name:mk731f293d5b39390bcf4e15f4078ebe7c03576e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 09:52:17.412027    4809 start.go:369] acquired machines lock for "custom-flannel-826000" in 25.333µs
	I0919 09:52:17.412040    4809 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-826000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.28.2 ClusterName:custom-flannel-826000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 09:52:17.412082    4809 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 09:52:17.420633    4809 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0919 09:52:17.436814    4809 start.go:159] libmachine.API.Create for "custom-flannel-826000" (driver="qemu2")
	I0919 09:52:17.436840    4809 client.go:168] LocalClient.Create starting
	I0919 09:52:17.436911    4809 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca.pem
	I0919 09:52:17.436939    4809 main.go:141] libmachine: Decoding PEM data...
	I0919 09:52:17.436950    4809 main.go:141] libmachine: Parsing certificate...
	I0919 09:52:17.436993    4809 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/cert.pem
	I0919 09:52:17.437012    4809 main.go:141] libmachine: Decoding PEM data...
	I0919 09:52:17.437018    4809 main.go:141] libmachine: Parsing certificate...
	I0919 09:52:17.437359    4809 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17240-943/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17240-943/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0919 09:52:17.554022    4809 main.go:141] libmachine: Creating SSH key...
	I0919 09:52:17.652065    4809 main.go:141] libmachine: Creating Disk image...
	I0919 09:52:17.652073    4809 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 09:52:17.652216    4809 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17240-943/.minikube/machines/custom-flannel-826000/disk.qcow2.raw /Users/jenkins/minikube-integration/17240-943/.minikube/machines/custom-flannel-826000/disk.qcow2
	I0919 09:52:17.660879    4809 main.go:141] libmachine: STDOUT: 
	I0919 09:52:17.660901    4809 main.go:141] libmachine: STDERR: 
	I0919 09:52:17.660953    4809 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/custom-flannel-826000/disk.qcow2 +20000M
	I0919 09:52:17.668232    4809 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 09:52:17.668248    4809 main.go:141] libmachine: STDERR: 
	I0919 09:52:17.668264    4809 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17240-943/.minikube/machines/custom-flannel-826000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17240-943/.minikube/machines/custom-flannel-826000/disk.qcow2
	I0919 09:52:17.668271    4809 main.go:141] libmachine: Starting QEMU VM...
	I0919 09:52:17.668312    4809 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17240-943/.minikube/machines/custom-flannel-826000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/custom-flannel-826000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17240-943/.minikube/machines/custom-flannel-826000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:3d:b2:5d:7c:b6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/custom-flannel-826000/disk.qcow2
	I0919 09:52:17.669861    4809 main.go:141] libmachine: STDOUT: 
	I0919 09:52:17.669878    4809 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 09:52:17.669894    4809 client.go:171] LocalClient.Create took 233.052166ms
	I0919 09:52:19.672041    4809 start.go:128] duration metric: createHost completed in 2.259974833s
	I0919 09:52:19.672102    4809 start.go:83] releasing machines lock for "custom-flannel-826000", held for 2.260104583s
	W0919 09:52:19.672172    4809 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:52:19.679385    4809 out.go:177] * Deleting "custom-flannel-826000" in qemu2 ...
	
	* 
	* Profile "stopped-upgrade-282000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p stopped-upgrade-282000"

-- /stdout --
version_upgrade_test.go:221: `minikube logs` after upgrade to HEAD from v1.6.2 failed: exit status 85
--- FAIL: TestStoppedBinaryUpgrade/MinikubeLogs (0.12s)
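Nearly every failure in this report collapses to the same root cause: `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. the socket_vmnet daemon was apparently not reachable on the agent. As a quick sanity check, a sketch like the following can confirm that the many distinct test failures share one error signature (the here-doc lines below are abbreviated stand-ins for the full report, not the report itself):

```shell
#!/bin/sh
# Write a few sample lines mimicking this report, then count how many
# mention the socket_vmnet connection failure. Against the real report
# file, the same grep would match once per failed VM start attempt.
cat <<'EOF' > /tmp/minikube_report_sample.txt
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
EOF
grep -c 'socket_vmnet.*Connection refused' /tmp/minikube_report_sample.txt
```

Run against the report itself instead of the sample file, a count close to the number of failed tests would support treating this as one infrastructure problem (the socket_vmnet service on the MacOS-M1-Agent-1 host) rather than 87 independent regressions.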

TestNetworkPlugins/group/calico/Start (9.75s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-826000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-826000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.751838417s)

-- stdout --
	* [calico-826000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17240
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node calico-826000 in cluster calico-826000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-826000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0919 09:52:20.330242    4838 out.go:296] Setting OutFile to fd 1 ...
	I0919 09:52:20.330376    4838 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:52:20.330379    4838 out.go:309] Setting ErrFile to fd 2...
	I0919 09:52:20.330382    4838 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:52:20.330509    4838 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17240-943/.minikube/bin
	I0919 09:52:20.331548    4838 out.go:303] Setting JSON to false
	I0919 09:52:20.346611    4838 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1314,"bootTime":1695141026,"procs":400,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0919 09:52:20.346696    4838 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0919 09:52:20.351147    4838 out.go:177] * [calico-826000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0919 09:52:20.357238    4838 out.go:177]   - MINIKUBE_LOCATION=17240
	I0919 09:52:20.361213    4838 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	I0919 09:52:20.357300    4838 notify.go:220] Checking for updates...
	I0919 09:52:20.369180    4838 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 09:52:20.372273    4838 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 09:52:20.375290    4838 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	I0919 09:52:20.378172    4838 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 09:52:20.381603    4838 config.go:182] Loaded profile config "custom-flannel-826000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 09:52:20.381663    4838 config.go:182] Loaded profile config "multinode-120000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 09:52:20.381707    4838 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 09:52:20.386202    4838 out.go:177] * Using the qemu2 driver based on user configuration
	I0919 09:52:20.393190    4838 start.go:298] selected driver: qemu2
	I0919 09:52:20.393197    4838 start.go:902] validating driver "qemu2" against <nil>
	I0919 09:52:20.393204    4838 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 09:52:20.395259    4838 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0919 09:52:20.398203    4838 out.go:177] * Automatically selected the socket_vmnet network
	I0919 09:52:20.401188    4838 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 09:52:20.401206    4838 cni.go:84] Creating CNI manager for "calico"
	I0919 09:52:20.401210    4838 start_flags.go:316] Found "Calico" CNI - setting NetworkPlugin=cni
	I0919 09:52:20.401216    4838 start_flags.go:321] config:
	{Name:calico-826000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:calico-826000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHA
gentPID:0 AutoPauseInterval:1m0s}
	I0919 09:52:20.405442    4838 iso.go:125] acquiring lock: {Name:mka8e023e06bcb4803b8b8b48f1b2d6ef6b15681 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 09:52:20.408258    4838 out.go:177] * Starting control plane node calico-826000 in cluster calico-826000
	I0919 09:52:20.415222    4838 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0919 09:52:20.415240    4838 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0919 09:52:20.415248    4838 cache.go:57] Caching tarball of preloaded images
	I0919 09:52:20.415311    4838 preload.go:174] Found /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0919 09:52:20.415317    4838 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0919 09:52:20.415380    4838 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/calico-826000/config.json ...
	I0919 09:52:20.415393    4838 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/calico-826000/config.json: {Name:mk6d31e532711f8fe2f31ae43f39135c0084b885 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 09:52:20.415607    4838 start.go:365] acquiring machines lock for calico-826000: {Name:mk731f293d5b39390bcf4e15f4078ebe7c03576e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 09:52:20.415636    4838 start.go:369] acquired machines lock for "calico-826000" in 23.125µs
	I0919 09:52:20.415648    4838 start.go:93] Provisioning new machine with config: &{Name:calico-826000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.2 ClusterName:calico-826000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 09:52:20.415676    4838 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 09:52:20.420213    4838 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0919 09:52:20.435925    4838 start.go:159] libmachine.API.Create for "calico-826000" (driver="qemu2")
	I0919 09:52:20.435950    4838 client.go:168] LocalClient.Create starting
	I0919 09:52:20.436033    4838 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca.pem
	I0919 09:52:20.436061    4838 main.go:141] libmachine: Decoding PEM data...
	I0919 09:52:20.436076    4838 main.go:141] libmachine: Parsing certificate...
	I0919 09:52:20.436111    4838 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/cert.pem
	I0919 09:52:20.436130    4838 main.go:141] libmachine: Decoding PEM data...
	I0919 09:52:20.436139    4838 main.go:141] libmachine: Parsing certificate...
	I0919 09:52:20.436453    4838 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17240-943/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17240-943/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0919 09:52:20.552034    4838 main.go:141] libmachine: Creating SSH key...
	I0919 09:52:20.593755    4838 main.go:141] libmachine: Creating Disk image...
	I0919 09:52:20.593761    4838 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 09:52:20.593892    4838 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17240-943/.minikube/machines/calico-826000/disk.qcow2.raw /Users/jenkins/minikube-integration/17240-943/.minikube/machines/calico-826000/disk.qcow2
	I0919 09:52:20.602286    4838 main.go:141] libmachine: STDOUT: 
	I0919 09:52:20.602298    4838 main.go:141] libmachine: STDERR: 
	I0919 09:52:20.602352    4838 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/calico-826000/disk.qcow2 +20000M
	I0919 09:52:20.609400    4838 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 09:52:20.609411    4838 main.go:141] libmachine: STDERR: 
	I0919 09:52:20.609432    4838 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17240-943/.minikube/machines/calico-826000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17240-943/.minikube/machines/calico-826000/disk.qcow2
	I0919 09:52:20.609440    4838 main.go:141] libmachine: Starting QEMU VM...
	I0919 09:52:20.609470    4838 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17240-943/.minikube/machines/calico-826000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/calico-826000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17240-943/.minikube/machines/calico-826000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:cb:b6:55:37:d0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/calico-826000/disk.qcow2
	I0919 09:52:20.610945    4838 main.go:141] libmachine: STDOUT: 
	I0919 09:52:20.610956    4838 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 09:52:20.610974    4838 client.go:171] LocalClient.Create took 175.022625ms
	I0919 09:52:22.613148    4838 start.go:128] duration metric: createHost completed in 2.1974915s
	I0919 09:52:22.613210    4838 start.go:83] releasing machines lock for "calico-826000", held for 2.197602875s
	W0919 09:52:22.613268    4838 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:52:22.623323    4838 out.go:177] * Deleting "calico-826000" in qemu2 ...
	W0919 09:52:22.644231    4838 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:52:22.644271    4838 start.go:703] Will try again in 5 seconds ...
	I0919 09:52:27.646277    4838 start.go:365] acquiring machines lock for calico-826000: {Name:mk731f293d5b39390bcf4e15f4078ebe7c03576e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 09:52:27.646359    4838 start.go:369] acquired machines lock for "calico-826000" in 64.792µs
	I0919 09:52:27.646384    4838 start.go:93] Provisioning new machine with config: &{Name:calico-826000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.2 ClusterName:calico-826000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 09:52:27.646425    4838 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 09:52:27.655012    4838 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0919 09:52:27.669460    4838 start.go:159] libmachine.API.Create for "calico-826000" (driver="qemu2")
	I0919 09:52:27.669484    4838 client.go:168] LocalClient.Create starting
	I0919 09:52:27.669540    4838 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca.pem
	I0919 09:52:27.669572    4838 main.go:141] libmachine: Decoding PEM data...
	I0919 09:52:27.669580    4838 main.go:141] libmachine: Parsing certificate...
	I0919 09:52:27.669622    4838 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/cert.pem
	I0919 09:52:27.669637    4838 main.go:141] libmachine: Decoding PEM data...
	I0919 09:52:27.669646    4838 main.go:141] libmachine: Parsing certificate...
	I0919 09:52:27.669891    4838 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17240-943/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17240-943/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0919 09:52:27.827980    4838 main.go:141] libmachine: Creating SSH key...
	I0919 09:52:27.998496    4838 main.go:141] libmachine: Creating Disk image...
	I0919 09:52:27.998512    4838 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 09:52:27.998681    4838 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17240-943/.minikube/machines/calico-826000/disk.qcow2.raw /Users/jenkins/minikube-integration/17240-943/.minikube/machines/calico-826000/disk.qcow2
	I0919 09:52:28.007715    4838 main.go:141] libmachine: STDOUT: 
	I0919 09:52:28.007738    4838 main.go:141] libmachine: STDERR: 
	I0919 09:52:28.007826    4838 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/calico-826000/disk.qcow2 +20000M
	I0919 09:52:28.015706    4838 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 09:52:28.015735    4838 main.go:141] libmachine: STDERR: 
	I0919 09:52:28.015759    4838 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17240-943/.minikube/machines/calico-826000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17240-943/.minikube/machines/calico-826000/disk.qcow2
	I0919 09:52:28.015766    4838 main.go:141] libmachine: Starting QEMU VM...
	I0919 09:52:28.015814    4838 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17240-943/.minikube/machines/calico-826000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/calico-826000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17240-943/.minikube/machines/calico-826000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:81:44:b6:0f:2c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/calico-826000/disk.qcow2
	I0919 09:52:28.017566    4838 main.go:141] libmachine: STDOUT: 
	I0919 09:52:28.017580    4838 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 09:52:28.017595    4838 client.go:171] LocalClient.Create took 348.113083ms
	I0919 09:52:30.019758    4838 start.go:128] duration metric: createHost completed in 2.373349042s
	I0919 09:52:30.019834    4838 start.go:83] releasing machines lock for "calico-826000", held for 2.373504916s
	W0919 09:52:30.020232    4838 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-826000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:52:30.032815    4838 out.go:177] 
	W0919 09:52:30.035774    4838 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 09:52:30.035813    4838 out.go:239] * 
	W0919 09:52:30.038261    4838 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 09:52:30.046684    4838 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.75s)

TestNetworkPlugins/group/false/Start (10.3s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-826000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-826000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (10.302445208s)

-- stdout --
	* [false-826000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17240
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node false-826000 in cluster false-826000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-826000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0919 09:52:29.310661    4960 out.go:296] Setting OutFile to fd 1 ...
	I0919 09:52:29.310800    4960 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:52:29.310802    4960 out.go:309] Setting ErrFile to fd 2...
	I0919 09:52:29.310805    4960 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:52:29.310932    4960 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17240-943/.minikube/bin
	I0919 09:52:29.311940    4960 out.go:303] Setting JSON to false
	I0919 09:52:29.326966    4960 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1323,"bootTime":1695141026,"procs":400,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0919 09:52:29.327029    4960 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0919 09:52:29.332062    4960 out.go:177] * [false-826000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0919 09:52:29.339047    4960 out.go:177]   - MINIKUBE_LOCATION=17240
	I0919 09:52:29.343090    4960 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	I0919 09:52:29.339130    4960 notify.go:220] Checking for updates...
	I0919 09:52:29.349083    4960 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 09:52:29.353105    4960 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 09:52:29.356077    4960 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	I0919 09:52:29.359101    4960 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 09:52:29.362475    4960 config.go:182] Loaded profile config "calico-826000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 09:52:29.362549    4960 config.go:182] Loaded profile config "multinode-120000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 09:52:29.362598    4960 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 09:52:29.367115    4960 out.go:177] * Using the qemu2 driver based on user configuration
	I0919 09:52:29.373984    4960 start.go:298] selected driver: qemu2
	I0919 09:52:29.373992    4960 start.go:902] validating driver "qemu2" against <nil>
	I0919 09:52:29.373997    4960 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 09:52:29.376102    4960 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0919 09:52:29.379072    4960 out.go:177] * Automatically selected the socket_vmnet network
	I0919 09:52:29.382217    4960 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 09:52:29.382248    4960 cni.go:84] Creating CNI manager for "false"
	I0919 09:52:29.382260    4960 start_flags.go:321] config:
	{Name:false-826000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:false-826000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 09:52:29.386387    4960 iso.go:125] acquiring lock: {Name:mka8e023e06bcb4803b8b8b48f1b2d6ef6b15681 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 09:52:29.394130    4960 out.go:177] * Starting control plane node false-826000 in cluster false-826000
	I0919 09:52:29.398065    4960 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0919 09:52:29.398084    4960 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0919 09:52:29.398097    4960 cache.go:57] Caching tarball of preloaded images
	I0919 09:52:29.398165    4960 preload.go:174] Found /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0919 09:52:29.398178    4960 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0919 09:52:29.398246    4960 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/false-826000/config.json ...
	I0919 09:52:29.398258    4960 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/false-826000/config.json: {Name:mka5a026e87ee70787a5e9e3ca59e3e283bcc635 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 09:52:29.398463    4960 start.go:365] acquiring machines lock for false-826000: {Name:mk731f293d5b39390bcf4e15f4078ebe7c03576e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 09:52:30.019953    4960 start.go:369] acquired machines lock for "false-826000" in 621.479417ms
	I0919 09:52:30.020128    4960 start.go:93] Provisioning new machine with config: &{Name:false-826000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:false-826000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 09:52:30.020445    4960 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 09:52:30.029759    4960 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0919 09:52:30.075282    4960 start.go:159] libmachine.API.Create for "false-826000" (driver="qemu2")
	I0919 09:52:30.075349    4960 client.go:168] LocalClient.Create starting
	I0919 09:52:30.075481    4960 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca.pem
	I0919 09:52:30.075530    4960 main.go:141] libmachine: Decoding PEM data...
	I0919 09:52:30.075550    4960 main.go:141] libmachine: Parsing certificate...
	I0919 09:52:30.075627    4960 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/cert.pem
	I0919 09:52:30.075668    4960 main.go:141] libmachine: Decoding PEM data...
	I0919 09:52:30.075686    4960 main.go:141] libmachine: Parsing certificate...
	I0919 09:52:30.076283    4960 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17240-943/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17240-943/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0919 09:52:30.204644    4960 main.go:141] libmachine: Creating SSH key...
	I0919 09:52:30.230223    4960 main.go:141] libmachine: Creating Disk image...
	I0919 09:52:30.230234    4960 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 09:52:30.230393    4960 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17240-943/.minikube/machines/false-826000/disk.qcow2.raw /Users/jenkins/minikube-integration/17240-943/.minikube/machines/false-826000/disk.qcow2
	I0919 09:52:30.239155    4960 main.go:141] libmachine: STDOUT: 
	I0919 09:52:30.239171    4960 main.go:141] libmachine: STDERR: 
	I0919 09:52:30.239240    4960 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/false-826000/disk.qcow2 +20000M
	I0919 09:52:30.247394    4960 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 09:52:30.247409    4960 main.go:141] libmachine: STDERR: 
	I0919 09:52:30.247427    4960 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17240-943/.minikube/machines/false-826000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17240-943/.minikube/machines/false-826000/disk.qcow2
	I0919 09:52:30.247435    4960 main.go:141] libmachine: Starting QEMU VM...
	I0919 09:52:30.247469    4960 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17240-943/.minikube/machines/false-826000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/false-826000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17240-943/.minikube/machines/false-826000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:44:90:e8:14:70 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/false-826000/disk.qcow2
	I0919 09:52:30.249277    4960 main.go:141] libmachine: STDOUT: 
	I0919 09:52:30.249288    4960 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 09:52:30.249307    4960 client.go:171] LocalClient.Create took 173.949791ms
	I0919 09:52:32.249432    4960 start.go:128] duration metric: createHost completed in 2.229010167s
	I0919 09:52:32.249452    4960 start.go:83] releasing machines lock for "false-826000", held for 2.229513209s
	W0919 09:52:32.249462    4960 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:52:32.254656    4960 out.go:177] * Deleting "false-826000" in qemu2 ...
	W0919 09:52:32.262350    4960 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:52:32.262359    4960 start.go:703] Will try again in 5 seconds ...
	I0919 09:52:37.264504    4960 start.go:365] acquiring machines lock for false-826000: {Name:mk731f293d5b39390bcf4e15f4078ebe7c03576e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 09:52:37.265048    4960 start.go:369] acquired machines lock for "false-826000" in 420.5µs
	I0919 09:52:37.265234    4960 start.go:93] Provisioning new machine with config: &{Name:false-826000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:false-826000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 09:52:37.265491    4960 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 09:52:37.275116    4960 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0919 09:52:37.323025    4960 start.go:159] libmachine.API.Create for "false-826000" (driver="qemu2")
	I0919 09:52:37.323092    4960 client.go:168] LocalClient.Create starting
	I0919 09:52:37.323239    4960 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca.pem
	I0919 09:52:37.323300    4960 main.go:141] libmachine: Decoding PEM data...
	I0919 09:52:37.323324    4960 main.go:141] libmachine: Parsing certificate...
	I0919 09:52:37.323379    4960 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/cert.pem
	I0919 09:52:37.323413    4960 main.go:141] libmachine: Decoding PEM data...
	I0919 09:52:37.323424    4960 main.go:141] libmachine: Parsing certificate...
	I0919 09:52:37.323898    4960 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17240-943/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17240-943/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0919 09:52:37.456323    4960 main.go:141] libmachine: Creating SSH key...
	I0919 09:52:37.525503    4960 main.go:141] libmachine: Creating Disk image...
	I0919 09:52:37.525511    4960 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 09:52:37.525641    4960 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17240-943/.minikube/machines/false-826000/disk.qcow2.raw /Users/jenkins/minikube-integration/17240-943/.minikube/machines/false-826000/disk.qcow2
	I0919 09:52:37.534174    4960 main.go:141] libmachine: STDOUT: 
	I0919 09:52:37.534194    4960 main.go:141] libmachine: STDERR: 
	I0919 09:52:37.534261    4960 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/false-826000/disk.qcow2 +20000M
	I0919 09:52:37.541839    4960 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 09:52:37.541851    4960 main.go:141] libmachine: STDERR: 
	I0919 09:52:37.541868    4960 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17240-943/.minikube/machines/false-826000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17240-943/.minikube/machines/false-826000/disk.qcow2
	I0919 09:52:37.541876    4960 main.go:141] libmachine: Starting QEMU VM...
	I0919 09:52:37.541918    4960 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17240-943/.minikube/machines/false-826000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/false-826000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17240-943/.minikube/machines/false-826000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:ab:f2:6c:45:d8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/false-826000/disk.qcow2
	I0919 09:52:37.543462    4960 main.go:141] libmachine: STDOUT: 
	I0919 09:52:37.543474    4960 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 09:52:37.543489    4960 client.go:171] LocalClient.Create took 220.3945ms
	I0919 09:52:39.545643    4960 start.go:128] duration metric: createHost completed in 2.280154709s
	I0919 09:52:39.545739    4960 start.go:83] releasing machines lock for "false-826000", held for 2.280697083s
	W0919 09:52:39.546157    4960 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-826000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:52:39.556860    4960 out.go:177] 
	W0919 09:52:39.560970    4960 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 09:52:39.561045    4960 out.go:239] * 
	W0919 09:52:39.564214    4960 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 09:52:39.574823    4960 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (10.30s)
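Every qemu2 start in this report fails the same way: the VM launch goes through `/opt/socket_vmnet/bin/socket_vmnet_client`, which cannot reach the socket_vmnet daemon at `/var/run/socket_vmnet` (`Connection refused`). A minimal pre-flight check for the build agent is sketched below; the socket and client paths are taken from the log, while the commented-out daemon start command is an assumption about a typical socket_vmnet install and should be verified against this agent's setup:

```shell
# Pre-flight check: minikube's qemu2 driver with the socket_vmnet network
# needs the socket_vmnet daemon listening on /var/run/socket_vmnet.
if [ -S /var/run/socket_vmnet ]; then
    echo "socket_vmnet socket present"
else
    echo "socket_vmnet socket missing"
    # Assumed (typical) way to start the daemon; confirm the binary path and
    # gateway address for this agent before running:
    #   sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet
fi
```

With the daemon up, `minikube delete -p <profile>` followed by a fresh `start` should get past the `Connection refused` at VM creation.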

TestStartStop/group/old-k8s-version/serial/FirstStart (9.98s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-404000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-404000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0: exit status 80 (9.947406375s)

-- stdout --
	* [old-k8s-version-404000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17240
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node old-k8s-version-404000 in cluster old-k8s-version-404000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-404000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0919 09:52:32.337597    5074 out.go:296] Setting OutFile to fd 1 ...
	I0919 09:52:32.337723    5074 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:52:32.337726    5074 out.go:309] Setting ErrFile to fd 2...
	I0919 09:52:32.337729    5074 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:52:32.337857    5074 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17240-943/.minikube/bin
	I0919 09:52:32.338831    5074 out.go:303] Setting JSON to false
	I0919 09:52:32.354137    5074 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1326,"bootTime":1695141026,"procs":400,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0919 09:52:32.354234    5074 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0919 09:52:32.358715    5074 out.go:177] * [old-k8s-version-404000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0919 09:52:32.365637    5074 out.go:177]   - MINIKUBE_LOCATION=17240
	I0919 09:52:32.369654    5074 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	I0919 09:52:32.365702    5074 notify.go:220] Checking for updates...
	I0919 09:52:32.372580    5074 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 09:52:32.375620    5074 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 09:52:32.378610    5074 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	I0919 09:52:32.381579    5074 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 09:52:32.384962    5074 config.go:182] Loaded profile config "false-826000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 09:52:32.385038    5074 config.go:182] Loaded profile config "multinode-120000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 09:52:32.385077    5074 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 09:52:32.389531    5074 out.go:177] * Using the qemu2 driver based on user configuration
	I0919 09:52:32.396594    5074 start.go:298] selected driver: qemu2
	I0919 09:52:32.396602    5074 start.go:902] validating driver "qemu2" against <nil>
	I0919 09:52:32.396609    5074 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 09:52:32.398612    5074 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0919 09:52:32.401630    5074 out.go:177] * Automatically selected the socket_vmnet network
	I0919 09:52:32.404717    5074 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 09:52:32.404739    5074 cni.go:84] Creating CNI manager for ""
	I0919 09:52:32.404747    5074 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0919 09:52:32.404751    5074 start_flags.go:321] config:
	{Name:old-k8s-version-404000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-404000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSoc
k: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 09:52:32.408707    5074 iso.go:125] acquiring lock: {Name:mka8e023e06bcb4803b8b8b48f1b2d6ef6b15681 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 09:52:32.415650    5074 out.go:177] * Starting control plane node old-k8s-version-404000 in cluster old-k8s-version-404000
	I0919 09:52:32.419571    5074 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0919 09:52:32.419589    5074 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0919 09:52:32.419598    5074 cache.go:57] Caching tarball of preloaded images
	I0919 09:52:32.419653    5074 preload.go:174] Found /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0919 09:52:32.419658    5074 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0919 09:52:32.419724    5074 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/old-k8s-version-404000/config.json ...
	I0919 09:52:32.419737    5074 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/old-k8s-version-404000/config.json: {Name:mk8fe818b196e7577fb9610fd318ed56c59288ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 09:52:32.419948    5074 start.go:365] acquiring machines lock for old-k8s-version-404000: {Name:mk731f293d5b39390bcf4e15f4078ebe7c03576e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 09:52:32.419977    5074 start.go:369] acquired machines lock for "old-k8s-version-404000" in 21.958µs
	I0919 09:52:32.419989    5074 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-404000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-404000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 09:52:32.420020    5074 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 09:52:32.428585    5074 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0919 09:52:32.443627    5074 start.go:159] libmachine.API.Create for "old-k8s-version-404000" (driver="qemu2")
	I0919 09:52:32.443656    5074 client.go:168] LocalClient.Create starting
	I0919 09:52:32.443720    5074 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca.pem
	I0919 09:52:32.443746    5074 main.go:141] libmachine: Decoding PEM data...
	I0919 09:52:32.443759    5074 main.go:141] libmachine: Parsing certificate...
	I0919 09:52:32.443796    5074 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/cert.pem
	I0919 09:52:32.443818    5074 main.go:141] libmachine: Decoding PEM data...
	I0919 09:52:32.443825    5074 main.go:141] libmachine: Parsing certificate...
	I0919 09:52:32.444132    5074 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17240-943/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17240-943/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0919 09:52:32.559241    5074 main.go:141] libmachine: Creating SSH key...
	I0919 09:52:32.828778    5074 main.go:141] libmachine: Creating Disk image...
	I0919 09:52:32.828790    5074 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 09:52:32.828984    5074 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17240-943/.minikube/machines/old-k8s-version-404000/disk.qcow2.raw /Users/jenkins/minikube-integration/17240-943/.minikube/machines/old-k8s-version-404000/disk.qcow2
	I0919 09:52:32.838079    5074 main.go:141] libmachine: STDOUT: 
	I0919 09:52:32.838092    5074 main.go:141] libmachine: STDERR: 
	I0919 09:52:32.838163    5074 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/old-k8s-version-404000/disk.qcow2 +20000M
	I0919 09:52:32.845606    5074 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 09:52:32.845619    5074 main.go:141] libmachine: STDERR: 
	I0919 09:52:32.845633    5074 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17240-943/.minikube/machines/old-k8s-version-404000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17240-943/.minikube/machines/old-k8s-version-404000/disk.qcow2
	I0919 09:52:32.845639    5074 main.go:141] libmachine: Starting QEMU VM...
	I0919 09:52:32.845679    5074 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17240-943/.minikube/machines/old-k8s-version-404000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/old-k8s-version-404000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17240-943/.minikube/machines/old-k8s-version-404000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:b3:34:5e:50:4f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/old-k8s-version-404000/disk.qcow2
	I0919 09:52:32.847292    5074 main.go:141] libmachine: STDOUT: 
	I0919 09:52:32.847304    5074 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 09:52:32.847334    5074 client.go:171] LocalClient.Create took 403.675834ms
	I0919 09:52:34.847943    5074 start.go:128] duration metric: createHost completed in 2.427949417s
	I0919 09:52:34.847999    5074 start.go:83] releasing machines lock for "old-k8s-version-404000", held for 2.428057292s
	W0919 09:52:34.848054    5074 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:52:34.854785    5074 out.go:177] * Deleting "old-k8s-version-404000" in qemu2 ...
	W0919 09:52:34.877278    5074 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:52:34.877308    5074 start.go:703] Will try again in 5 seconds ...
	I0919 09:52:39.878354    5074 start.go:365] acquiring machines lock for old-k8s-version-404000: {Name:mk731f293d5b39390bcf4e15f4078ebe7c03576e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 09:52:39.878477    5074 start.go:369] acquired machines lock for "old-k8s-version-404000" in 96.583µs
	I0919 09:52:39.878504    5074 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-404000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-404000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 09:52:39.878547    5074 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 09:52:39.885670    5074 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0919 09:52:39.899811    5074 start.go:159] libmachine.API.Create for "old-k8s-version-404000" (driver="qemu2")
	I0919 09:52:39.899847    5074 client.go:168] LocalClient.Create starting
	I0919 09:52:39.899908    5074 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca.pem
	I0919 09:52:39.899949    5074 main.go:141] libmachine: Decoding PEM data...
	I0919 09:52:39.899958    5074 main.go:141] libmachine: Parsing certificate...
	I0919 09:52:39.899994    5074 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/cert.pem
	I0919 09:52:39.900012    5074 main.go:141] libmachine: Decoding PEM data...
	I0919 09:52:39.900019    5074 main.go:141] libmachine: Parsing certificate...
	I0919 09:52:39.900278    5074 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17240-943/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17240-943/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0919 09:52:40.081626    5074 main.go:141] libmachine: Creating SSH key...
	I0919 09:52:40.204373    5074 main.go:141] libmachine: Creating Disk image...
	I0919 09:52:40.204383    5074 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 09:52:40.204545    5074 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17240-943/.minikube/machines/old-k8s-version-404000/disk.qcow2.raw /Users/jenkins/minikube-integration/17240-943/.minikube/machines/old-k8s-version-404000/disk.qcow2
	I0919 09:52:40.213293    5074 main.go:141] libmachine: STDOUT: 
	I0919 09:52:40.213312    5074 main.go:141] libmachine: STDERR: 
	I0919 09:52:40.213378    5074 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/old-k8s-version-404000/disk.qcow2 +20000M
	I0919 09:52:40.221497    5074 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 09:52:40.221525    5074 main.go:141] libmachine: STDERR: 
	I0919 09:52:40.221547    5074 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17240-943/.minikube/machines/old-k8s-version-404000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17240-943/.minikube/machines/old-k8s-version-404000/disk.qcow2
	I0919 09:52:40.221562    5074 main.go:141] libmachine: Starting QEMU VM...
	I0919 09:52:40.221608    5074 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17240-943/.minikube/machines/old-k8s-version-404000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/old-k8s-version-404000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17240-943/.minikube/machines/old-k8s-version-404000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:a5:ec:71:a3:c7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/old-k8s-version-404000/disk.qcow2
	I0919 09:52:40.223420    5074 main.go:141] libmachine: STDOUT: 
	I0919 09:52:40.223439    5074 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 09:52:40.223457    5074 client.go:171] LocalClient.Create took 323.612417ms
	I0919 09:52:42.225483    5074 start.go:128] duration metric: createHost completed in 2.346970709s
	I0919 09:52:42.225501    5074 start.go:83] releasing machines lock for "old-k8s-version-404000", held for 2.347059583s
	W0919 09:52:42.225587    5074 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-404000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-404000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:52:42.237830    5074 out.go:177] 
	W0919 09:52:42.239283    5074 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 09:52:42.239291    5074 out.go:239] * 
	* 
	W0919 09:52:42.239717    5074 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 09:52:42.250811    5074 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-404000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-404000 -n old-k8s-version-404000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-404000 -n old-k8s-version-404000: exit status 7 (33.904792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-404000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.98s)
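Every start attempt in this block dies the same way: `Failed to connect to "/var/run/socket_vmnet": Connection refused` while launching `qemu-system-aarch64` through `socket_vmnet_client`, which points at the socket_vmnet daemon on the agent rather than at minikube itself. A minimal triage sketch one could run on the Jenkins agent (the socket path is taken verbatim from the log above; the `brew services` name is an assumption, not confirmed by this report):

```shell
# Check whether the socket_vmnet daemon is listening on the path minikube uses.
# "Connection refused" usually means the unix socket is absent or orphaned.
SOCK=/var/run/socket_vmnet
if [ -S "$SOCK" ]; then
  # Socket file exists; the daemon behind it may still be dead.
  echo "socket present: $SOCK (verify the daemon, e.g. 'sudo launchctl list | grep socket_vmnet')"
else
  # No socket at all: socket_vmnet is not running on this agent.
  echo "socket missing: $SOCK (hypothetical fix: 'sudo brew services start socket_vmnet')"
fi
```

If the socket is missing or stale, restarting socket_vmnet before re-running the suite would distinguish an agent-environment problem from a minikube regression; the identical failure across every qemu2 test above suggests the former.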

TestStartStop/group/no-preload/serial/FirstStart (10.27s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-820000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-820000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.2: exit status 80 (10.22061125s)

-- stdout --
	* [no-preload-820000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17240
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node no-preload-820000 in cluster no-preload-820000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-820000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0919 09:52:41.701430    5190 out.go:296] Setting OutFile to fd 1 ...
	I0919 09:52:41.701567    5190 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:52:41.701569    5190 out.go:309] Setting ErrFile to fd 2...
	I0919 09:52:41.701572    5190 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:52:41.701695    5190 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17240-943/.minikube/bin
	I0919 09:52:41.702768    5190 out.go:303] Setting JSON to false
	I0919 09:52:41.717760    5190 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1335,"bootTime":1695141026,"procs":402,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0919 09:52:41.717828    5190 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0919 09:52:41.723019    5190 out.go:177] * [no-preload-820000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0919 09:52:41.731061    5190 out.go:177]   - MINIKUBE_LOCATION=17240
	I0919 09:52:41.731139    5190 notify.go:220] Checking for updates...
	I0919 09:52:41.734968    5190 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	I0919 09:52:41.738045    5190 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 09:52:41.741052    5190 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 09:52:41.743925    5190 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	I0919 09:52:41.747039    5190 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 09:52:41.750289    5190 config.go:182] Loaded profile config "multinode-120000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 09:52:41.750363    5190 config.go:182] Loaded profile config "old-k8s-version-404000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0919 09:52:41.750406    5190 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 09:52:41.754989    5190 out.go:177] * Using the qemu2 driver based on user configuration
	I0919 09:52:41.761982    5190 start.go:298] selected driver: qemu2
	I0919 09:52:41.761991    5190 start.go:902] validating driver "qemu2" against <nil>
	I0919 09:52:41.761998    5190 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 09:52:41.763989    5190 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0919 09:52:41.766895    5190 out.go:177] * Automatically selected the socket_vmnet network
	I0919 09:52:41.770044    5190 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 09:52:41.770063    5190 cni.go:84] Creating CNI manager for ""
	I0919 09:52:41.770072    5190 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 09:52:41.770077    5190 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0919 09:52:41.770083    5190 start_flags.go:321] config:
	{Name:no-preload-820000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:no-preload-820000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:d
ocker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSH
AgentPID:0 AutoPauseInterval:1m0s}
	I0919 09:52:41.774170    5190 iso.go:125] acquiring lock: {Name:mka8e023e06bcb4803b8b8b48f1b2d6ef6b15681 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 09:52:41.779957    5190 out.go:177] * Starting control plane node no-preload-820000 in cluster no-preload-820000
	I0919 09:52:41.788012    5190 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0919 09:52:41.788100    5190 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/no-preload-820000/config.json ...
	I0919 09:52:41.788126    5190 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/no-preload-820000/config.json: {Name:mkd379aaa1c7d8d815e9440f631995cce00b8cf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 09:52:41.788122    5190 cache.go:107] acquiring lock: {Name:mkf3fdaea0b21c620ead164c607792b2266ce348 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 09:52:41.788137    5190 cache.go:107] acquiring lock: {Name:mkc59de489e48c83fb92f76641d5782c417b1daa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 09:52:41.788169    5190 cache.go:107] acquiring lock: {Name:mk30db79d274176444f45c99414eda22835bc3d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 09:52:41.788123    5190 cache.go:107] acquiring lock: {Name:mkfaaef1ae9fdfa01368adf24b2ff1c2b3834997 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 09:52:41.788303    5190 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.2
	I0919 09:52:41.788331    5190 cache.go:115] /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0919 09:52:41.788337    5190 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 229.708µs
	I0919 09:52:41.788348    5190 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0919 09:52:41.788346    5190 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.2
	I0919 09:52:41.788371    5190 start.go:365] acquiring machines lock for no-preload-820000: {Name:mk731f293d5b39390bcf4e15f4078ebe7c03576e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 09:52:41.788377    5190 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.2
	I0919 09:52:41.788378    5190 cache.go:107] acquiring lock: {Name:mk0448b245f11b8402dde314a9e8c6be07fadcce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 09:52:41.788423    5190 cache.go:107] acquiring lock: {Name:mk64d442e9d21148a6c42a5ab0526e98a242d4b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 09:52:41.788442    5190 cache.go:107] acquiring lock: {Name:mkf56cb23d3c250349463285875de488ee05fa3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 09:52:41.788363    5190 cache.go:107] acquiring lock: {Name:mk41fd33083012da6fef5c543f43bbcef472f9bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 09:52:41.788488    5190 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0919 09:52:41.788548    5190 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0919 09:52:41.788557    5190 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I0919 09:52:41.788740    5190 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.2
	I0919 09:52:41.794260    5190 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.2
	I0919 09:52:41.795237    5190 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.2
	I0919 09:52:41.795406    5190 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0919 09:52:41.795427    5190 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0919 09:52:41.795457    5190 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.2
	I0919 09:52:41.795704    5190 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.2
	I0919 09:52:41.795765    5190 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I0919 09:52:42.225579    5190 start.go:369] acquired machines lock for "no-preload-820000" in 437.198625ms
	I0919 09:52:42.225622    5190 start.go:93] Provisioning new machine with config: &{Name:no-preload-820000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.2 ClusterName:no-preload-820000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 09:52:42.225674    5190 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 09:52:42.234749    5190 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0919 09:52:42.249059    5190 start.go:159] libmachine.API.Create for "no-preload-820000" (driver="qemu2")
	I0919 09:52:42.249086    5190 client.go:168] LocalClient.Create starting
	I0919 09:52:42.249143    5190 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca.pem
	I0919 09:52:42.249168    5190 main.go:141] libmachine: Decoding PEM data...
	I0919 09:52:42.249179    5190 main.go:141] libmachine: Parsing certificate...
	I0919 09:52:42.249214    5190 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/cert.pem
	I0919 09:52:42.249232    5190 main.go:141] libmachine: Decoding PEM data...
	I0919 09:52:42.249239    5190 main.go:141] libmachine: Parsing certificate...
	I0919 09:52:42.255296    5190 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17240-943/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17240-943/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0919 09:52:42.385845    5190 main.go:141] libmachine: Creating SSH key...
	I0919 09:52:42.407320    5190 cache.go:162] opening:  /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.2
	I0919 09:52:42.411723    5190 main.go:141] libmachine: Creating Disk image...
	I0919 09:52:42.411734    5190 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 09:52:42.411893    5190 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17240-943/.minikube/machines/no-preload-820000/disk.qcow2.raw /Users/jenkins/minikube-integration/17240-943/.minikube/machines/no-preload-820000/disk.qcow2
	I0919 09:52:42.421137    5190 main.go:141] libmachine: STDOUT: 
	I0919 09:52:42.421155    5190 main.go:141] libmachine: STDERR: 
	I0919 09:52:42.421227    5190 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/no-preload-820000/disk.qcow2 +20000M
	I0919 09:52:42.430518    5190 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 09:52:42.430536    5190 main.go:141] libmachine: STDERR: 
	I0919 09:52:42.430556    5190 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17240-943/.minikube/machines/no-preload-820000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17240-943/.minikube/machines/no-preload-820000/disk.qcow2
	I0919 09:52:42.430564    5190 main.go:141] libmachine: Starting QEMU VM...
	I0919 09:52:42.430606    5190 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17240-943/.minikube/machines/no-preload-820000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/no-preload-820000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17240-943/.minikube/machines/no-preload-820000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:07:61:50:34:42 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/no-preload-820000/disk.qcow2
	I0919 09:52:42.432435    5190 main.go:141] libmachine: STDOUT: 
	I0919 09:52:42.432449    5190 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 09:52:42.432467    5190 client.go:171] LocalClient.Create took 183.378459ms
	I0919 09:52:42.441791    5190 cache.go:162] opening:  /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.2
	I0919 09:52:42.606855    5190 cache.go:162] opening:  /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0919 09:52:42.785582    5190 cache.go:157] /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0919 09:52:42.785602    5190 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 997.244166ms
	I0919 09:52:42.785614    5190 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0919 09:52:42.842987    5190 cache.go:162] opening:  /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1
	I0919 09:52:43.029848    5190 cache.go:162] opening:  /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.2
	I0919 09:52:43.298235    5190 cache.go:162] opening:  /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.2
	I0919 09:52:43.432589    5190 cache.go:162] opening:  /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0
	I0919 09:52:44.432688    5190 start.go:128] duration metric: createHost completed in 2.207006209s
	I0919 09:52:44.432748    5190 start.go:83] releasing machines lock for "no-preload-820000", held for 2.207179625s
	W0919 09:52:44.432837    5190 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:52:44.446369    5190 out.go:177] * Deleting "no-preload-820000" in qemu2 ...
	W0919 09:52:44.467604    5190 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:52:44.467634    5190 start.go:703] Will try again in 5 seconds ...
	I0919 09:52:45.282241    5190 cache.go:157] /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.2 exists
	I0919 09:52:45.282304    5190 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.28.2" -> "/Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.2" took 3.494221833s
	I0919 09:52:45.282332    5190 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.28.2 -> /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.2 succeeded
	I0919 09:52:45.808381    5190 cache.go:157] /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.2 exists
	I0919 09:52:45.808450    5190 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.28.2" -> "/Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.2" took 4.020377875s
	I0919 09:52:45.808482    5190 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.28.2 -> /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.2 succeeded
	I0919 09:52:46.088431    5190 cache.go:157] /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I0919 09:52:46.088474    5190 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1" took 4.300131916s
	I0919 09:52:46.088504    5190 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I0919 09:52:46.944465    5190 cache.go:157] /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.2 exists
	I0919 09:52:46.944516    5190 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.28.2" -> "/Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.2" took 5.156492917s
	I0919 09:52:46.944575    5190 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.28.2 -> /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.2 succeeded
	I0919 09:52:46.994118    5190 cache.go:157] /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.2 exists
	I0919 09:52:46.994153    5190 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.28.2" -> "/Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.2" took 5.2058695s
	I0919 09:52:46.994183    5190 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.28.2 -> /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.2 succeeded
	I0919 09:52:49.476257    5190 start.go:365] acquiring machines lock for no-preload-820000: {Name:mk731f293d5b39390bcf4e15f4078ebe7c03576e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 09:52:49.485331    5190 start.go:369] acquired machines lock for "no-preload-820000" in 9.020792ms
	I0919 09:52:49.485386    5190 start.go:93] Provisioning new machine with config: &{Name:no-preload-820000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.2 ClusterName:no-preload-820000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 09:52:49.485569    5190 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 09:52:49.492752    5190 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0919 09:52:49.540075    5190 start.go:159] libmachine.API.Create for "no-preload-820000" (driver="qemu2")
	I0919 09:52:49.540109    5190 client.go:168] LocalClient.Create starting
	I0919 09:52:49.540251    5190 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca.pem
	I0919 09:52:49.540295    5190 main.go:141] libmachine: Decoding PEM data...
	I0919 09:52:49.540318    5190 main.go:141] libmachine: Parsing certificate...
	I0919 09:52:49.540373    5190 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/cert.pem
	I0919 09:52:49.540411    5190 main.go:141] libmachine: Decoding PEM data...
	I0919 09:52:49.540426    5190 main.go:141] libmachine: Parsing certificate...
	I0919 09:52:49.540899    5190 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17240-943/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17240-943/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0919 09:52:49.671468    5190 main.go:141] libmachine: Creating SSH key...
	I0919 09:52:49.831000    5190 main.go:141] libmachine: Creating Disk image...
	I0919 09:52:49.831009    5190 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 09:52:49.831174    5190 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17240-943/.minikube/machines/no-preload-820000/disk.qcow2.raw /Users/jenkins/minikube-integration/17240-943/.minikube/machines/no-preload-820000/disk.qcow2
	I0919 09:52:49.840177    5190 main.go:141] libmachine: STDOUT: 
	I0919 09:52:49.840197    5190 main.go:141] libmachine: STDERR: 
	I0919 09:52:49.840256    5190 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/no-preload-820000/disk.qcow2 +20000M
	I0919 09:52:49.848923    5190 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 09:52:49.848941    5190 main.go:141] libmachine: STDERR: 
	I0919 09:52:49.848955    5190 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17240-943/.minikube/machines/no-preload-820000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17240-943/.minikube/machines/no-preload-820000/disk.qcow2
	I0919 09:52:49.848964    5190 main.go:141] libmachine: Starting QEMU VM...
	I0919 09:52:49.849033    5190 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17240-943/.minikube/machines/no-preload-820000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/no-preload-820000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17240-943/.minikube/machines/no-preload-820000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:51:41:86:a8:5b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/no-preload-820000/disk.qcow2
	I0919 09:52:49.850859    5190 main.go:141] libmachine: STDOUT: 
	I0919 09:52:49.850890    5190 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 09:52:49.850902    5190 client.go:171] LocalClient.Create took 310.793ms
	I0919 09:52:51.852400    5190 start.go:128] duration metric: createHost completed in 2.366826375s
	I0919 09:52:51.852481    5190 start.go:83] releasing machines lock for "no-preload-820000", held for 2.367163792s
	W0919 09:52:51.852753    5190 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-820000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-820000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:52:51.869260    5190 out.go:177] 
	W0919 09:52:51.873548    5190 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 09:52:51.873591    5190 out.go:239] * 
	* 
	I0919 09:52:51.873934    5190 cache.go:157] /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0 exists
	I0919 09:52:51.873962    5190 cache.go:96] cache image "registry.k8s.io/etcd:3.5.9-0" -> "/Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0" took 10.085831958s
	I0919 09:52:51.873981    5190 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.9-0 -> /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0 succeeded
	I0919 09:52:51.874017    5190 cache.go:87] Successfully saved all images to host disk.
	W0919 09:52:51.875606    5190 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 09:52:51.886269    5190 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-820000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-820000 -n no-preload-820000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-820000 -n no-preload-820000: exit status 7 (46.268958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-820000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (10.27s)
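Note on the failure above (and the qemu2 failures throughout this report): the root cause recorded in the log is `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. the socket_vmnet daemon was not reachable on the CI host when `socket_vmnet_client` tried to launch QEMU. A minimal pre-flight sketch that could catch this before the test run (the socket path is taken from the log; the Homebrew service name is an assumption based on a typical `brew install socket_vmnet` setup):

```shell
# Check that the socket_vmnet daemon is listening before starting qemu2-driver tests.
# The qemu command in the log connects via /opt/socket_vmnet/bin/socket_vmnet_client,
# which requires the daemon's Unix socket at /var/run/socket_vmnet.
SOCK=/var/run/socket_vmnet
if [ -S "$SOCK" ]; then
  echo "socket_vmnet socket present at $SOCK"
else
  echo "socket_vmnet socket missing at $SOCK"
  # On a Homebrew-managed host the daemon is typically (re)started with:
  #   sudo brew services start socket_vmnet
fi
```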

TestStartStop/group/old-k8s-version/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-404000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-404000 create -f testdata/busybox.yaml: exit status 1 (31.039875ms)

** stderr ** 
	error: no openapi getter

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-404000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-404000 -n old-k8s-version-404000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-404000 -n old-k8s-version-404000: exit status 7 (33.24325ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-404000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-404000 -n old-k8s-version-404000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-404000 -n old-k8s-version-404000: exit status 7 (32.39175ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-404000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.10s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-404000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-404000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-404000 describe deploy/metrics-server -n kube-system: exit status 1 (27.139375ms)
** stderr ** 
	error: context "old-k8s-version-404000" does not exist
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-404000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-404000 -n old-k8s-version-404000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-404000 -n old-k8s-version-404000: exit status 7 (27.930667ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-404000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/old-k8s-version/serial/SecondStart (6.92s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-404000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0
E0919 09:52:44.381663    2051 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/functional-085000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-404000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0: exit status 80 (6.873019875s)
-- stdout --
	* [old-k8s-version-404000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17240
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	* Using the qemu2 driver based on existing profile
	* Starting control plane node old-k8s-version-404000 in cluster old-k8s-version-404000
	* Restarting existing qemu2 VM for "old-k8s-version-404000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-404000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
-- /stdout --
** stderr ** 
	I0919 09:52:42.676187    5276 out.go:296] Setting OutFile to fd 1 ...
	I0919 09:52:42.676309    5276 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:52:42.676312    5276 out.go:309] Setting ErrFile to fd 2...
	I0919 09:52:42.676315    5276 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:52:42.676454    5276 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17240-943/.minikube/bin
	I0919 09:52:42.677479    5276 out.go:303] Setting JSON to false
	I0919 09:52:42.693510    5276 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1336,"bootTime":1695141026,"procs":402,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0919 09:52:42.693603    5276 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0919 09:52:42.697523    5276 out.go:177] * [old-k8s-version-404000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0919 09:52:42.707603    5276 out.go:177]   - MINIKUBE_LOCATION=17240
	I0919 09:52:42.704793    5276 notify.go:220] Checking for updates...
	I0919 09:52:42.715493    5276 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	I0919 09:52:42.723652    5276 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 09:52:42.731638    5276 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 09:52:42.739672    5276 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	I0919 09:52:42.743648    5276 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 09:52:42.747866    5276 config.go:182] Loaded profile config "old-k8s-version-404000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0919 09:52:42.751644    5276 out.go:177] * Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	I0919 09:52:42.755673    5276 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 09:52:42.759608    5276 out.go:177] * Using the qemu2 driver based on existing profile
	I0919 09:52:42.766496    5276 start.go:298] selected driver: qemu2
	I0919 09:52:42.766506    5276 start.go:902] validating driver "qemu2" against &{Name:old-k8s-version-404000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-404000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequ
ested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 09:52:42.766561    5276 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 09:52:42.768323    5276 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 09:52:42.768350    5276 cni.go:84] Creating CNI manager for ""
	I0919 09:52:42.768356    5276 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0919 09:52:42.768362    5276 start_flags.go:321] config:
	{Name:old-k8s-version-404000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-404000 Namespace:
default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Use
rs:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 09:52:42.771922    5276 iso.go:125] acquiring lock: {Name:mka8e023e06bcb4803b8b8b48f1b2d6ef6b15681 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 09:52:42.779669    5276 out.go:177] * Starting control plane node old-k8s-version-404000 in cluster old-k8s-version-404000
	I0919 09:52:42.783664    5276 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0919 09:52:42.783691    5276 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0919 09:52:42.783702    5276 cache.go:57] Caching tarball of preloaded images
	I0919 09:52:42.783772    5276 preload.go:174] Found /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0919 09:52:42.783778    5276 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0919 09:52:42.783838    5276 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/old-k8s-version-404000/config.json ...
	I0919 09:52:42.784076    5276 start.go:365] acquiring machines lock for old-k8s-version-404000: {Name:mk731f293d5b39390bcf4e15f4078ebe7c03576e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 09:52:44.433013    5276 start.go:369] acquired machines lock for "old-k8s-version-404000" in 1.648893583s
	I0919 09:52:44.433109    5276 start.go:96] Skipping create...Using existing machine configuration
	I0919 09:52:44.433130    5276 fix.go:54] fixHost starting: 
	I0919 09:52:44.433757    5276 fix.go:102] recreateIfNeeded on old-k8s-version-404000: state=Stopped err=<nil>
	W0919 09:52:44.433804    5276 fix.go:128] unexpected machine state, will restart: <nil>
	I0919 09:52:44.442314    5276 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-404000" ...
	I0919 09:52:44.449530    5276 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17240-943/.minikube/machines/old-k8s-version-404000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/old-k8s-version-404000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17240-943/.minikube/machines/old-k8s-version-404000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:a5:ec:71:a3:c7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/old-k8s-version-404000/disk.qcow2
	I0919 09:52:44.459061    5276 main.go:141] libmachine: STDOUT: 
	I0919 09:52:44.459118    5276 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 09:52:44.459209    5276 fix.go:56] fixHost completed within 26.073167ms
	I0919 09:52:44.459227    5276 start.go:83] releasing machines lock for "old-k8s-version-404000", held for 26.179958ms
	W0919 09:52:44.459253    5276 start.go:688] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 09:52:44.459410    5276 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:52:44.459425    5276 start.go:703] Will try again in 5 seconds ...
	I0919 09:52:49.461608    5276 start.go:365] acquiring machines lock for old-k8s-version-404000: {Name:mk731f293d5b39390bcf4e15f4078ebe7c03576e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 09:52:49.462115    5276 start.go:369] acquired machines lock for "old-k8s-version-404000" in 423.75µs
	I0919 09:52:49.462269    5276 start.go:96] Skipping create...Using existing machine configuration
	I0919 09:52:49.462292    5276 fix.go:54] fixHost starting: 
	I0919 09:52:49.463055    5276 fix.go:102] recreateIfNeeded on old-k8s-version-404000: state=Stopped err=<nil>
	W0919 09:52:49.463082    5276 fix.go:128] unexpected machine state, will restart: <nil>
	I0919 09:52:49.467662    5276 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-404000" ...
	I0919 09:52:49.475805    5276 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17240-943/.minikube/machines/old-k8s-version-404000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/old-k8s-version-404000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17240-943/.minikube/machines/old-k8s-version-404000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:a5:ec:71:a3:c7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/old-k8s-version-404000/disk.qcow2
	I0919 09:52:49.485071    5276 main.go:141] libmachine: STDOUT: 
	I0919 09:52:49.485129    5276 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 09:52:49.485230    5276 fix.go:56] fixHost completed within 22.936209ms
	I0919 09:52:49.485255    5276 start.go:83] releasing machines lock for "old-k8s-version-404000", held for 23.117ms
	W0919 09:52:49.485522    5276 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-404000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-404000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:52:49.496992    5276 out.go:177] 
	W0919 09:52:49.501729    5276 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 09:52:49.501787    5276 out.go:239] * 
	* 
	W0919 09:52:49.504234    5276 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 09:52:49.513627    5276 out.go:177] 
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-404000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0": exit status 80
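Every start attempt in this group fails the same way: the qemu2 driver cannot reach the socket_vmnet daemon ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"). A minimal triage sketch for the CI host, assuming only the socket path taken from the logs above (this check is illustrative, not part of the test suite):

```shell
#!/bin/sh
# Check whether socket_vmnet is serving its unix socket.
# Path taken verbatim from the failing qemu2 driver output.
SOCKET="/var/run/socket_vmnet"

if [ -S "$SOCKET" ]; then
    echo "socket_vmnet socket present at $SOCKET"
else
    # Matches the failure mode in this report: nothing listening on the socket,
    # so every qemu-system-aarch64 launch dies with 'Connection refused'.
    echo "socket_vmnet socket missing at $SOCKET"
fi
```

If the socket is missing, restarting the socket_vmnet service on the agent (however it is managed on that host) would be the next step before rerunning the group.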
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-404000 -n old-k8s-version-404000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-404000 -n old-k8s-version-404000: exit status 7 (48.585708ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-404000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (6.92s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-404000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-404000 -n old-k8s-version-404000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-404000 -n old-k8s-version-404000: exit status 7 (33.61525ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-404000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-404000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-404000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-404000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.355625ms)
** stderr ** 
	error: context "old-k8s-version-404000" does not exist
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-404000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-404000 -n old-k8s-version-404000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-404000 -n old-k8s-version-404000: exit status 7 (31.209833ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-404000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p old-k8s-version-404000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p old-k8s-version-404000 "sudo crictl images -o json": exit status 89 (39.881459ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p old-k8s-version-404000"
-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p old-k8s-version-404000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p old-k8s-version-404000"
start_stop_delete_test.go:304: v1.16.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.6.2",
- 	"k8s.gcr.io/etcd:3.3.15-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.16.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.16.0",
- 	"k8s.gcr.io/kube-proxy:v1.16.0",
- 	"k8s.gcr.io/kube-scheduler:v1.16.0",
- 	"k8s.gcr.io/pause:3.1",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-404000 -n old-k8s-version-404000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-404000 -n old-k8s-version-404000: exit status 7 (27.185334ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-404000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-404000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-404000 --alsologtostderr -v=1: exit status 89 (43.347708ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p old-k8s-version-404000"
-- /stdout --
** stderr ** 
	I0919 09:52:49.761119    5336 out.go:296] Setting OutFile to fd 1 ...
	I0919 09:52:49.761512    5336 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:52:49.761518    5336 out.go:309] Setting ErrFile to fd 2...
	I0919 09:52:49.761520    5336 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:52:49.761655    5336 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17240-943/.minikube/bin
	I0919 09:52:49.761882    5336 out.go:303] Setting JSON to false
	I0919 09:52:49.761893    5336 mustload.go:65] Loading cluster: old-k8s-version-404000
	I0919 09:52:49.762077    5336 config.go:182] Loaded profile config "old-k8s-version-404000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0919 09:52:49.766435    5336 out.go:177] * The control plane node must be running for this command
	I0919 09:52:49.774585    5336 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-404000"
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-404000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-404000 -n old-k8s-version-404000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-404000 -n old-k8s-version-404000: exit status 7 (27.490334ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-404000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-404000 -n old-k8s-version-404000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-404000 -n old-k8s-version-404000: exit status 7 (27.723708ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-404000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)

TestStartStop/group/embed-certs/serial/FirstStart (11.46s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-444000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-444000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.2: exit status 80 (11.410527458s)

-- stdout --
	* [embed-certs-444000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17240
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node embed-certs-444000 in cluster embed-certs-444000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-444000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0919 09:52:50.224729    5362 out.go:296] Setting OutFile to fd 1 ...
	I0919 09:52:50.224854    5362 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:52:50.224857    5362 out.go:309] Setting ErrFile to fd 2...
	I0919 09:52:50.224860    5362 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:52:50.224985    5362 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17240-943/.minikube/bin
	I0919 09:52:50.226071    5362 out.go:303] Setting JSON to false
	I0919 09:52:50.241274    5362 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1344,"bootTime":1695141026,"procs":400,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0919 09:52:50.241360    5362 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0919 09:52:50.245495    5362 out.go:177] * [embed-certs-444000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0919 09:52:50.257496    5362 out.go:177]   - MINIKUBE_LOCATION=17240
	I0919 09:52:50.253577    5362 notify.go:220] Checking for updates...
	I0919 09:52:50.265531    5362 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	I0919 09:52:50.273498    5362 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 09:52:50.281522    5362 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 09:52:50.289502    5362 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	I0919 09:52:50.297507    5362 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 09:52:50.301850    5362 config.go:182] Loaded profile config "multinode-120000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 09:52:50.301917    5362 config.go:182] Loaded profile config "no-preload-820000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 09:52:50.301964    5362 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 09:52:50.303468    5362 out.go:177] * Using the qemu2 driver based on user configuration
	I0919 09:52:50.311510    5362 start.go:298] selected driver: qemu2
	I0919 09:52:50.311516    5362 start.go:902] validating driver "qemu2" against <nil>
	I0919 09:52:50.311521    5362 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 09:52:50.313831    5362 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0919 09:52:50.318483    5362 out.go:177] * Automatically selected the socket_vmnet network
	I0919 09:52:50.321587    5362 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 09:52:50.321609    5362 cni.go:84] Creating CNI manager for ""
	I0919 09:52:50.321616    5362 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 09:52:50.321621    5362 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0919 09:52:50.321632    5362 start_flags.go:321] config:
	{Name:embed-certs-444000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:embed-certs-444000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SS
HAgentPID:0 AutoPauseInterval:1m0s}
	I0919 09:52:50.326047    5362 iso.go:125] acquiring lock: {Name:mka8e023e06bcb4803b8b8b48f1b2d6ef6b15681 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 09:52:50.333524    5362 out.go:177] * Starting control plane node embed-certs-444000 in cluster embed-certs-444000
	I0919 09:52:50.336503    5362 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0919 09:52:50.336524    5362 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0919 09:52:50.336531    5362 cache.go:57] Caching tarball of preloaded images
	I0919 09:52:50.336594    5362 preload.go:174] Found /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0919 09:52:50.336600    5362 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0919 09:52:50.336669    5362 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/embed-certs-444000/config.json ...
	I0919 09:52:50.336682    5362 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/embed-certs-444000/config.json: {Name:mk713ca3109cac73288e5e2fbe8d7c615fb5380d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 09:52:50.336873    5362 start.go:365] acquiring machines lock for embed-certs-444000: {Name:mk731f293d5b39390bcf4e15f4078ebe7c03576e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 09:52:51.852728    5362 start.go:369] acquired machines lock for "embed-certs-444000" in 1.51585875s
	I0919 09:52:51.852955    5362 start.go:93] Provisioning new machine with config: &{Name:embed-certs-444000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.2 ClusterName:embed-certs-444000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mount
MSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 09:52:51.853158    5362 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 09:52:51.865327    5362 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0919 09:52:51.910773    5362 start.go:159] libmachine.API.Create for "embed-certs-444000" (driver="qemu2")
	I0919 09:52:51.910817    5362 client.go:168] LocalClient.Create starting
	I0919 09:52:51.910925    5362 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca.pem
	I0919 09:52:51.910973    5362 main.go:141] libmachine: Decoding PEM data...
	I0919 09:52:51.910993    5362 main.go:141] libmachine: Parsing certificate...
	I0919 09:52:51.911051    5362 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/cert.pem
	I0919 09:52:51.911087    5362 main.go:141] libmachine: Decoding PEM data...
	I0919 09:52:51.911105    5362 main.go:141] libmachine: Parsing certificate...
	I0919 09:52:51.911677    5362 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17240-943/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17240-943/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0919 09:52:52.043271    5362 main.go:141] libmachine: Creating SSH key...
	I0919 09:52:52.197834    5362 main.go:141] libmachine: Creating Disk image...
	I0919 09:52:52.197842    5362 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 09:52:52.197967    5362 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17240-943/.minikube/machines/embed-certs-444000/disk.qcow2.raw /Users/jenkins/minikube-integration/17240-943/.minikube/machines/embed-certs-444000/disk.qcow2
	I0919 09:52:52.206808    5362 main.go:141] libmachine: STDOUT: 
	I0919 09:52:52.206827    5362 main.go:141] libmachine: STDERR: 
	I0919 09:52:52.206892    5362 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/embed-certs-444000/disk.qcow2 +20000M
	I0919 09:52:52.214645    5362 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 09:52:52.214665    5362 main.go:141] libmachine: STDERR: 
	I0919 09:52:52.214686    5362 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17240-943/.minikube/machines/embed-certs-444000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17240-943/.minikube/machines/embed-certs-444000/disk.qcow2
	I0919 09:52:52.214698    5362 main.go:141] libmachine: Starting QEMU VM...
	I0919 09:52:52.214743    5362 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17240-943/.minikube/machines/embed-certs-444000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/embed-certs-444000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17240-943/.minikube/machines/embed-certs-444000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:6c:da:76:bb:ae -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/embed-certs-444000/disk.qcow2
	I0919 09:52:52.216505    5362 main.go:141] libmachine: STDOUT: 
	I0919 09:52:52.216521    5362 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 09:52:52.216541    5362 client.go:171] LocalClient.Create took 305.722333ms
	I0919 09:52:54.218955    5362 start.go:128] duration metric: createHost completed in 2.3657425s
	I0919 09:52:54.219065    5362 start.go:83] releasing machines lock for "embed-certs-444000", held for 2.366298916s
	W0919 09:52:54.219123    5362 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:52:54.236632    5362 out.go:177] * Deleting "embed-certs-444000" in qemu2 ...
	W0919 09:52:54.261062    5362 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:52:54.261096    5362 start.go:703] Will try again in 5 seconds ...
	I0919 09:52:59.263154    5362 start.go:365] acquiring machines lock for embed-certs-444000: {Name:mk731f293d5b39390bcf4e15f4078ebe7c03576e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 09:52:59.272465    5362 start.go:369] acquired machines lock for "embed-certs-444000" in 9.236334ms
	I0919 09:52:59.272516    5362 start.go:93] Provisioning new machine with config: &{Name:embed-certs-444000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.2 ClusterName:embed-certs-444000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mount
MSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 09:52:59.272683    5362 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 09:52:59.283767    5362 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0919 09:52:59.327583    5362 start.go:159] libmachine.API.Create for "embed-certs-444000" (driver="qemu2")
	I0919 09:52:59.327618    5362 client.go:168] LocalClient.Create starting
	I0919 09:52:59.327714    5362 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca.pem
	I0919 09:52:59.327762    5362 main.go:141] libmachine: Decoding PEM data...
	I0919 09:52:59.327786    5362 main.go:141] libmachine: Parsing certificate...
	I0919 09:52:59.327845    5362 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/cert.pem
	I0919 09:52:59.327879    5362 main.go:141] libmachine: Decoding PEM data...
	I0919 09:52:59.327893    5362 main.go:141] libmachine: Parsing certificate...
	I0919 09:52:59.328372    5362 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17240-943/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17240-943/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0919 09:52:59.462385    5362 main.go:141] libmachine: Creating SSH key...
	I0919 09:52:59.543165    5362 main.go:141] libmachine: Creating Disk image...
	I0919 09:52:59.543174    5362 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 09:52:59.543346    5362 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17240-943/.minikube/machines/embed-certs-444000/disk.qcow2.raw /Users/jenkins/minikube-integration/17240-943/.minikube/machines/embed-certs-444000/disk.qcow2
	I0919 09:52:59.560526    5362 main.go:141] libmachine: STDOUT: 
	I0919 09:52:59.560540    5362 main.go:141] libmachine: STDERR: 
	I0919 09:52:59.560601    5362 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/embed-certs-444000/disk.qcow2 +20000M
	I0919 09:52:59.568688    5362 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 09:52:59.568719    5362 main.go:141] libmachine: STDERR: 
	I0919 09:52:59.568740    5362 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17240-943/.minikube/machines/embed-certs-444000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17240-943/.minikube/machines/embed-certs-444000/disk.qcow2
	I0919 09:52:59.568748    5362 main.go:141] libmachine: Starting QEMU VM...
	I0919 09:52:59.568781    5362 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17240-943/.minikube/machines/embed-certs-444000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/embed-certs-444000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17240-943/.minikube/machines/embed-certs-444000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:ae:73:cd:3d:4a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/embed-certs-444000/disk.qcow2
	I0919 09:52:59.570529    5362 main.go:141] libmachine: STDOUT: 
	I0919 09:52:59.570543    5362 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 09:52:59.570557    5362 client.go:171] LocalClient.Create took 242.938833ms
	I0919 09:53:01.572735    5362 start.go:128] duration metric: createHost completed in 2.300060708s
	I0919 09:53:01.572817    5362 start.go:83] releasing machines lock for "embed-certs-444000", held for 2.300369333s
	W0919 09:53:01.573216    5362 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-444000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-444000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:53:01.585659    5362 out.go:177] 
	W0919 09:53:01.588938    5362 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 09:53:01.588978    5362 out.go:239] * 
	* 
	W0919 09:53:01.591233    5362 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 09:53:01.599832    5362 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-444000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-444000 -n embed-certs-444000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-444000 -n embed-certs-444000: exit status 7 (48.89975ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-444000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (11.46s)
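Triage note (not part of the captured output): every qemu2 start failure in this group reduces to the same `Failed to connect to "/var/run/socket_vmnet": Connection refused` error, raised when `socket_vmnet_client` cannot reach the socket_vmnet daemon's unix socket. A minimal check one might run on the CI host, assuming the socket and binary paths shown in the logs above (`/var/run/socket_vmnet`, `/opt/socket_vmnet/bin/socket_vmnet_client`):

```shell
#!/bin/sh
# Check whether the socket_vmnet daemon is reachable at the path minikube uses.
SOCK=/var/run/socket_vmnet

if [ -S "$SOCK" ]; then
  echo "socket_vmnet socket present: $SOCK"
else
  echo "socket_vmnet socket missing: $SOCK (daemon likely not running)"
fi

# List any running socket_vmnet processes; print a note if none are found.
pgrep -fl socket_vmnet || echo "no socket_vmnet process found"
```

If the socket is missing, restarting the socket_vmnet daemon (however it is managed on this agent, e.g. via launchd) would be the first fix to try before rerunning the suite.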

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-820000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-820000 create -f testdata/busybox.yaml: exit status 1 (30.006958ms)

** stderr ** 
	error: no openapi getter

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-820000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-820000 -n no-preload-820000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-820000 -n no-preload-820000: exit status 7 (30.433209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-820000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-820000 -n no-preload-820000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-820000 -n no-preload-820000: exit status 7 (29.930958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-820000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-820000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-820000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-820000 describe deploy/metrics-server -n kube-system: exit status 1 (26.00475ms)

** stderr ** 
	error: context "no-preload-820000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-820000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-820000 -n no-preload-820000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-820000 -n no-preload-820000: exit status 7 (28.308666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-820000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/no-preload/serial/SecondStart (7.07s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-820000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-820000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.2: exit status 80 (7.0198175s)

-- stdout --
	* [no-preload-820000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17240
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node no-preload-820000 in cluster no-preload-820000
	* Restarting existing qemu2 VM for "no-preload-820000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-820000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I0919 09:52:52.313784    5400 out.go:296] Setting OutFile to fd 1 ...
	I0919 09:52:52.313896    5400 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:52:52.313899    5400 out.go:309] Setting ErrFile to fd 2...
	I0919 09:52:52.313902    5400 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:52:52.314023    5400 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17240-943/.minikube/bin
	I0919 09:52:52.314964    5400 out.go:303] Setting JSON to false
	I0919 09:52:52.329561    5400 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1346,"bootTime":1695141026,"procs":410,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0919 09:52:52.329627    5400 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0919 09:52:52.334075    5400 out.go:177] * [no-preload-820000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0919 09:52:52.345244    5400 out.go:177]   - MINIKUBE_LOCATION=17240
	I0919 09:52:52.342271    5400 notify.go:220] Checking for updates...
	I0919 09:52:52.351233    5400 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	I0919 09:52:52.355244    5400 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 09:52:52.358211    5400 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 09:52:52.361255    5400 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	I0919 09:52:52.364227    5400 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 09:52:52.367544    5400 config.go:182] Loaded profile config "no-preload-820000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 09:52:52.367789    5400 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 09:52:52.372237    5400 out.go:177] * Using the qemu2 driver based on existing profile
	I0919 09:52:52.379213    5400 start.go:298] selected driver: qemu2
	I0919 09:52:52.379219    5400 start.go:902] validating driver "qemu2" against &{Name:no-preload-820000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.2 ClusterName:no-preload-820000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:
false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 09:52:52.379275    5400 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 09:52:52.381440    5400 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 09:52:52.381472    5400 cni.go:84] Creating CNI manager for ""
	I0919 09:52:52.381479    5400 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 09:52:52.381487    5400 start_flags.go:321] config:
	{Name:no-preload-820000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:no-preload-820000 Namespace:default A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/mi
nikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 09:52:52.385574    5400 iso.go:125] acquiring lock: {Name:mka8e023e06bcb4803b8b8b48f1b2d6ef6b15681 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 09:52:52.394177    5400 out.go:177] * Starting control plane node no-preload-820000 in cluster no-preload-820000
	I0919 09:52:52.398248    5400 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0919 09:52:52.398311    5400 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/no-preload-820000/config.json ...
	I0919 09:52:52.398328    5400 cache.go:107] acquiring lock: {Name:mkfaaef1ae9fdfa01368adf24b2ff1c2b3834997 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 09:52:52.398328    5400 cache.go:107] acquiring lock: {Name:mkf3fdaea0b21c620ead164c607792b2266ce348 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 09:52:52.398353    5400 cache.go:107] acquiring lock: {Name:mk64d442e9d21148a6c42a5ab0526e98a242d4b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 09:52:52.398361    5400 cache.go:107] acquiring lock: {Name:mkc59de489e48c83fb92f76641d5782c417b1daa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 09:52:52.398394    5400 cache.go:115] /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.2 exists
	I0919 09:52:52.398401    5400 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.28.2" -> "/Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.2" took 76.791µs
	I0919 09:52:52.398409    5400 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.28.2 -> /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.2 succeeded
	I0919 09:52:52.398402    5400 cache.go:107] acquiring lock: {Name:mk41fd33083012da6fef5c543f43bbcef472f9bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 09:52:52.398415    5400 cache.go:115] /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.2 exists
	I0919 09:52:52.398416    5400 cache.go:107] acquiring lock: {Name:mkf56cb23d3c250349463285875de488ee05fa3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 09:52:52.398424    5400 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.28.2" -> "/Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.2" took 72.417µs
	I0919 09:52:52.398433    5400 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.28.2 -> /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.2 succeeded
	I0919 09:52:52.398440    5400 cache.go:107] acquiring lock: {Name:mk30db79d274176444f45c99414eda22835bc3d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 09:52:52.398488    5400 cache.go:115] /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0919 09:52:52.398494    5400 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 171.5µs
	I0919 09:52:52.398499    5400 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0919 09:52:52.398489    5400 cache.go:115] /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.2 exists
	I0919 09:52:52.398520    5400 cache.go:115] /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I0919 09:52:52.398546    5400 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1" took 130.834µs
	I0919 09:52:52.398550    5400 cache.go:115] /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.2 exists
	I0919 09:52:52.398552    5400 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I0919 09:52:52.398537    5400 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.28.2" -> "/Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.2" took 195.708µs
	I0919 09:52:52.398561    5400 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.28.2 -> /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.2 succeeded
	I0919 09:52:52.398560    5400 cache.go:107] acquiring lock: {Name:mk0448b245f11b8402dde314a9e8c6be07fadcce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 09:52:52.398554    5400 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.28.2" -> "/Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.2" took 115.291µs
	I0919 09:52:52.398592    5400 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.28.2 -> /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.2 succeeded
	I0919 09:52:52.398620    5400 cache.go:115] /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0919 09:52:52.398622    5400 cache.go:115] /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0 exists
	I0919 09:52:52.398625    5400 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 119.167µs
	I0919 09:52:52.398632    5400 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0919 09:52:52.398628    5400 cache.go:96] cache image "registry.k8s.io/etcd:3.5.9-0" -> "/Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0" took 226.25µs
	I0919 09:52:52.398636    5400 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.9-0 -> /Users/jenkins/minikube-integration/17240-943/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0 succeeded
	I0919 09:52:52.398646    5400 cache.go:87] Successfully saved all images to host disk.
	I0919 09:52:52.398684    5400 start.go:365] acquiring machines lock for no-preload-820000: {Name:mk731f293d5b39390bcf4e15f4078ebe7c03576e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 09:52:54.219221    5400 start.go:369] acquired machines lock for "no-preload-820000" in 1.820508333s
	I0919 09:52:54.219334    5400 start.go:96] Skipping create...Using existing machine configuration
	I0919 09:52:54.219374    5400 fix.go:54] fixHost starting: 
	I0919 09:52:54.220037    5400 fix.go:102] recreateIfNeeded on no-preload-820000: state=Stopped err=<nil>
	W0919 09:52:54.220084    5400 fix.go:128] unexpected machine state, will restart: <nil>
	I0919 09:52:54.229625    5400 out.go:177] * Restarting existing qemu2 VM for "no-preload-820000" ...
	I0919 09:52:54.240809    5400 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17240-943/.minikube/machines/no-preload-820000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/no-preload-820000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17240-943/.minikube/machines/no-preload-820000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:51:41:86:a8:5b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/no-preload-820000/disk.qcow2
	I0919 09:52:54.249960    5400 main.go:141] libmachine: STDOUT: 
	I0919 09:52:54.250044    5400 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 09:52:54.250192    5400 fix.go:56] fixHost completed within 30.828875ms
	I0919 09:52:54.250220    5400 start.go:83] releasing machines lock for "no-preload-820000", held for 30.96575ms
	W0919 09:52:54.250255    5400 start.go:688] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 09:52:54.250511    5400 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:52:54.250534    5400 start.go:703] Will try again in 5 seconds ...
	I0919 09:52:59.252715    5400 start.go:365] acquiring machines lock for no-preload-820000: {Name:mk731f293d5b39390bcf4e15f4078ebe7c03576e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 09:52:59.253225    5400 start.go:369] acquired machines lock for "no-preload-820000" in 417.375µs
	I0919 09:52:59.253373    5400 start.go:96] Skipping create...Using existing machine configuration
	I0919 09:52:59.253393    5400 fix.go:54] fixHost starting: 
	I0919 09:52:59.254120    5400 fix.go:102] recreateIfNeeded on no-preload-820000: state=Stopped err=<nil>
	W0919 09:52:59.254148    5400 fix.go:128] unexpected machine state, will restart: <nil>
	I0919 09:52:59.258779    5400 out.go:177] * Restarting existing qemu2 VM for "no-preload-820000" ...
	I0919 09:52:59.262982    5400 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17240-943/.minikube/machines/no-preload-820000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/no-preload-820000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17240-943/.minikube/machines/no-preload-820000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:51:41:86:a8:5b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/no-preload-820000/disk.qcow2
	I0919 09:52:59.272232    5400 main.go:141] libmachine: STDOUT: 
	I0919 09:52:59.272279    5400 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 09:52:59.272355    5400 fix.go:56] fixHost completed within 18.964333ms
	I0919 09:52:59.272385    5400 start.go:83] releasing machines lock for "no-preload-820000", held for 19.138125ms
	W0919 09:52:59.272548    5400 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-820000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-820000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:52:59.283798    5400 out.go:177] 
	W0919 09:52:59.287849    5400 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 09:52:59.287892    5400 out.go:239] * 
	* 
	W0919 09:52:59.290047    5400 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 09:52:59.299688    5400 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-820000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-820000 -n no-preload-820000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-820000 -n no-preload-820000: exit status 7 (48.8315ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-820000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (7.07s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-820000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-820000 -n no-preload-820000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-820000 -n no-preload-820000: exit status 7 (32.250042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-820000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-820000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-820000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-820000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (28.449167ms)

** stderr ** 
	error: context "no-preload-820000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-820000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-820000 -n no-preload-820000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-820000 -n no-preload-820000: exit status 7 (31.583333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-820000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p no-preload-820000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p no-preload-820000 "sudo crictl images -o json": exit status 89 (40.2385ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p no-preload-820000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p no-preload-820000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p no-preload-820000"
start_stop_delete_test.go:304: v1.28.2 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.2",
- 	"registry.k8s.io/kube-controller-manager:v1.28.2",
- 	"registry.k8s.io/kube-proxy:v1.28.2",
- 	"registry.k8s.io/kube-scheduler:v1.28.2",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-820000 -n no-preload-820000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-820000 -n no-preload-820000: exit status 7 (27.6775ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-820000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-820000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-820000 --alsologtostderr -v=1: exit status 89 (42.91ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p no-preload-820000"

-- /stdout --
** stderr ** 
	I0919 09:52:59.547706    5423 out.go:296] Setting OutFile to fd 1 ...
	I0919 09:52:59.547833    5423 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:52:59.547836    5423 out.go:309] Setting ErrFile to fd 2...
	I0919 09:52:59.547839    5423 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:52:59.547962    5423 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17240-943/.minikube/bin
	I0919 09:52:59.548192    5423 out.go:303] Setting JSON to false
	I0919 09:52:59.548201    5423 mustload.go:65] Loading cluster: no-preload-820000
	I0919 09:52:59.548404    5423 config.go:182] Loaded profile config "no-preload-820000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 09:52:59.552740    5423 out.go:177] * The control plane node must be running for this command
	I0919 09:52:59.559700    5423 out.go:177]   To start a cluster, run: "minikube start -p no-preload-820000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-820000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-820000 -n no-preload-820000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-820000 -n no-preload-820000: exit status 7 (26.397208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-820000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-820000 -n no-preload-820000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-820000 -n no-preload-820000: exit status 7 (26.48375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-820000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (11.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-645000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-645000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.2: exit status 80 (11.0314695s)

-- stdout --
	* [default-k8s-diff-port-645000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17240
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node default-k8s-diff-port-645000 in cluster default-k8s-diff-port-645000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-645000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0919 09:53:00.231212    5461 out.go:296] Setting OutFile to fd 1 ...
	I0919 09:53:00.231336    5461 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:53:00.231339    5461 out.go:309] Setting ErrFile to fd 2...
	I0919 09:53:00.231341    5461 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:53:00.231489    5461 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17240-943/.minikube/bin
	I0919 09:53:00.232440    5461 out.go:303] Setting JSON to false
	I0919 09:53:00.247633    5461 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1354,"bootTime":1695141026,"procs":413,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0919 09:53:00.247715    5461 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0919 09:53:00.252711    5461 out.go:177] * [default-k8s-diff-port-645000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0919 09:53:00.256864    5461 out.go:177]   - MINIKUBE_LOCATION=17240
	I0919 09:53:00.259838    5461 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	I0919 09:53:00.256938    5461 notify.go:220] Checking for updates...
	I0919 09:53:00.265749    5461 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 09:53:00.268841    5461 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 09:53:00.270198    5461 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	I0919 09:53:00.272967    5461 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 09:53:00.276188    5461 config.go:182] Loaded profile config "embed-certs-444000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 09:53:00.276251    5461 config.go:182] Loaded profile config "multinode-120000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 09:53:00.276301    5461 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 09:53:00.280605    5461 out.go:177] * Using the qemu2 driver based on user configuration
	I0919 09:53:00.287815    5461 start.go:298] selected driver: qemu2
	I0919 09:53:00.287822    5461 start.go:902] validating driver "qemu2" against <nil>
	I0919 09:53:00.287827    5461 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 09:53:00.289792    5461 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0919 09:53:00.292833    5461 out.go:177] * Automatically selected the socket_vmnet network
	I0919 09:53:00.295821    5461 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 09:53:00.295840    5461 cni.go:84] Creating CNI manager for ""
	I0919 09:53:00.295846    5461 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 09:53:00.295850    5461 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0919 09:53:00.295854    5461 start_flags.go:321] config:
	{Name:default-k8s-diff-port-645000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-645000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 09:53:00.300134    5461 iso.go:125] acquiring lock: {Name:mka8e023e06bcb4803b8b8b48f1b2d6ef6b15681 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 09:53:00.305794    5461 out.go:177] * Starting control plane node default-k8s-diff-port-645000 in cluster default-k8s-diff-port-645000
	I0919 09:53:00.309766    5461 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0919 09:53:00.309785    5461 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0919 09:53:00.309796    5461 cache.go:57] Caching tarball of preloaded images
	I0919 09:53:00.309851    5461 preload.go:174] Found /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0919 09:53:00.309856    5461 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0919 09:53:00.309912    5461 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/default-k8s-diff-port-645000/config.json ...
	I0919 09:53:00.309925    5461 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/default-k8s-diff-port-645000/config.json: {Name:mk17b257b9c8cde6698899333b6cbfb6d98e7c40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 09:53:00.310160    5461 start.go:365] acquiring machines lock for default-k8s-diff-port-645000: {Name:mk731f293d5b39390bcf4e15f4078ebe7c03576e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 09:53:01.572928    5461 start.go:369] acquired machines lock for "default-k8s-diff-port-645000" in 1.262761958s
	I0919 09:53:01.573192    5461 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-645000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-645000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 09:53:01.573448    5461 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 09:53:01.581806    5461 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0919 09:53:01.627392    5461 start.go:159] libmachine.API.Create for "default-k8s-diff-port-645000" (driver="qemu2")
	I0919 09:53:01.627438    5461 client.go:168] LocalClient.Create starting
	I0919 09:53:01.627537    5461 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca.pem
	I0919 09:53:01.627589    5461 main.go:141] libmachine: Decoding PEM data...
	I0919 09:53:01.627607    5461 main.go:141] libmachine: Parsing certificate...
	I0919 09:53:01.627669    5461 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/cert.pem
	I0919 09:53:01.627705    5461 main.go:141] libmachine: Decoding PEM data...
	I0919 09:53:01.627719    5461 main.go:141] libmachine: Parsing certificate...
	I0919 09:53:01.628286    5461 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17240-943/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17240-943/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0919 09:53:01.757838    5461 main.go:141] libmachine: Creating SSH key...
	I0919 09:53:01.812818    5461 main.go:141] libmachine: Creating Disk image...
	I0919 09:53:01.812827    5461 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 09:53:01.812963    5461 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17240-943/.minikube/machines/default-k8s-diff-port-645000/disk.qcow2.raw /Users/jenkins/minikube-integration/17240-943/.minikube/machines/default-k8s-diff-port-645000/disk.qcow2
	I0919 09:53:01.821688    5461 main.go:141] libmachine: STDOUT: 
	I0919 09:53:01.821704    5461 main.go:141] libmachine: STDERR: 
	I0919 09:53:01.821767    5461 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/default-k8s-diff-port-645000/disk.qcow2 +20000M
	I0919 09:53:01.829929    5461 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 09:53:01.829948    5461 main.go:141] libmachine: STDERR: 
	I0919 09:53:01.829965    5461 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17240-943/.minikube/machines/default-k8s-diff-port-645000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17240-943/.minikube/machines/default-k8s-diff-port-645000/disk.qcow2
	I0919 09:53:01.829971    5461 main.go:141] libmachine: Starting QEMU VM...
	I0919 09:53:01.830006    5461 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17240-943/.minikube/machines/default-k8s-diff-port-645000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/default-k8s-diff-port-645000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17240-943/.minikube/machines/default-k8s-diff-port-645000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:3a:45:9f:6e:ef -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/default-k8s-diff-port-645000/disk.qcow2
	I0919 09:53:01.831699    5461 main.go:141] libmachine: STDOUT: 
	I0919 09:53:01.831713    5461 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 09:53:01.831730    5461 client.go:171] LocalClient.Create took 204.289208ms
	I0919 09:53:03.833897    5461 start.go:128] duration metric: createHost completed in 2.260452667s
	I0919 09:53:03.833973    5461 start.go:83] releasing machines lock for "default-k8s-diff-port-645000", held for 2.261047s
	W0919 09:53:03.834038    5461 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:53:03.847530    5461 out.go:177] * Deleting "default-k8s-diff-port-645000" in qemu2 ...
	W0919 09:53:03.868587    5461 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:53:03.868622    5461 start.go:703] Will try again in 5 seconds ...
	I0919 09:53:08.870735    5461 start.go:365] acquiring machines lock for default-k8s-diff-port-645000: {Name:mk731f293d5b39390bcf4e15f4078ebe7c03576e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 09:53:08.884377    5461 start.go:369] acquired machines lock for "default-k8s-diff-port-645000" in 13.557459ms
	I0919 09:53:08.884437    5461 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-645000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-645000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 09:53:08.884638    5461 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 09:53:08.897174    5461 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0919 09:53:08.941442    5461 start.go:159] libmachine.API.Create for "default-k8s-diff-port-645000" (driver="qemu2")
	I0919 09:53:08.941476    5461 client.go:168] LocalClient.Create starting
	I0919 09:53:08.941579    5461 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca.pem
	I0919 09:53:08.941628    5461 main.go:141] libmachine: Decoding PEM data...
	I0919 09:53:08.941643    5461 main.go:141] libmachine: Parsing certificate...
	I0919 09:53:08.941710    5461 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/cert.pem
	I0919 09:53:08.941746    5461 main.go:141] libmachine: Decoding PEM data...
	I0919 09:53:08.941761    5461 main.go:141] libmachine: Parsing certificate...
	I0919 09:53:08.942188    5461 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17240-943/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17240-943/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0919 09:53:09.073472    5461 main.go:141] libmachine: Creating SSH key...
	I0919 09:53:09.169141    5461 main.go:141] libmachine: Creating Disk image...
	I0919 09:53:09.169149    5461 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 09:53:09.169287    5461 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17240-943/.minikube/machines/default-k8s-diff-port-645000/disk.qcow2.raw /Users/jenkins/minikube-integration/17240-943/.minikube/machines/default-k8s-diff-port-645000/disk.qcow2
	I0919 09:53:09.185681    5461 main.go:141] libmachine: STDOUT: 
	I0919 09:53:09.185706    5461 main.go:141] libmachine: STDERR: 
	I0919 09:53:09.185782    5461 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/default-k8s-diff-port-645000/disk.qcow2 +20000M
	I0919 09:53:09.193710    5461 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 09:53:09.193730    5461 main.go:141] libmachine: STDERR: 
	I0919 09:53:09.193750    5461 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17240-943/.minikube/machines/default-k8s-diff-port-645000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17240-943/.minikube/machines/default-k8s-diff-port-645000/disk.qcow2
	I0919 09:53:09.193759    5461 main.go:141] libmachine: Starting QEMU VM...
	I0919 09:53:09.193799    5461 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17240-943/.minikube/machines/default-k8s-diff-port-645000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/default-k8s-diff-port-645000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17240-943/.minikube/machines/default-k8s-diff-port-645000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:50:b4:ab:c5:33 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/default-k8s-diff-port-645000/disk.qcow2
	I0919 09:53:09.195593    5461 main.go:141] libmachine: STDOUT: 
	I0919 09:53:09.195609    5461 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 09:53:09.195622    5461 client.go:171] LocalClient.Create took 254.135959ms
	I0919 09:53:11.197805    5461 start.go:128] duration metric: createHost completed in 2.313178459s
	I0919 09:53:11.197887    5461 start.go:83] releasing machines lock for "default-k8s-diff-port-645000", held for 2.3135215s
	W0919 09:53:11.198275    5461 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-645000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-645000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:53:11.212022    5461 out.go:177] 
	W0919 09:53:11.215162    5461 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 09:53:11.215217    5461 out.go:239] * 
	* 
	W0919 09:53:11.217706    5461 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 09:53:11.228878    5461 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-645000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-645000 -n default-k8s-diff-port-645000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-645000 -n default-k8s-diff-port-645000: exit status 7 (47.894625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-645000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (11.08s)

TestStartStop/group/embed-certs/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-444000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-444000 create -f testdata/busybox.yaml: exit status 1 (30.092083ms)

** stderr ** 
	error: no openapi getter

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-444000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-444000 -n embed-certs-444000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-444000 -n embed-certs-444000: exit status 7 (33.43775ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-444000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-444000 -n embed-certs-444000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-444000 -n embed-certs-444000: exit status 7 (31.659292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-444000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-444000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-444000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-444000 describe deploy/metrics-server -n kube-system: exit status 1 (26.476125ms)

** stderr ** 
	error: context "embed-certs-444000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-444000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-444000 -n embed-certs-444000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-444000 -n embed-certs-444000: exit status 7 (27.347625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-444000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/embed-certs/serial/SecondStart (6.96s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-444000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-444000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.2: exit status 80 (6.910536834s)

-- stdout --
	* [embed-certs-444000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17240
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node embed-certs-444000 in cluster embed-certs-444000
	* Restarting existing qemu2 VM for "embed-certs-444000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-444000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0919 09:53:02.038113    5490 out.go:296] Setting OutFile to fd 1 ...
	I0919 09:53:02.038237    5490 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:53:02.038240    5490 out.go:309] Setting ErrFile to fd 2...
	I0919 09:53:02.038242    5490 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:53:02.038359    5490 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17240-943/.minikube/bin
	I0919 09:53:02.039361    5490 out.go:303] Setting JSON to false
	I0919 09:53:02.054286    5490 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1356,"bootTime":1695141026,"procs":414,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0919 09:53:02.054378    5490 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0919 09:53:02.058824    5490 out.go:177] * [embed-certs-444000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0919 09:53:02.069745    5490 out.go:177]   - MINIKUBE_LOCATION=17240
	I0919 09:53:02.065726    5490 notify.go:220] Checking for updates...
	I0919 09:53:02.077677    5490 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	I0919 09:53:02.084724    5490 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 09:53:02.092788    5490 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 09:53:02.096761    5490 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	I0919 09:53:02.103731    5490 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 09:53:02.108170    5490 config.go:182] Loaded profile config "embed-certs-444000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 09:53:02.108448    5490 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 09:53:02.112731    5490 out.go:177] * Using the qemu2 driver based on existing profile
	I0919 09:53:02.119713    5490 start.go:298] selected driver: qemu2
	I0919 09:53:02.119719    5490 start.go:902] validating driver "qemu2" against &{Name:embed-certs-444000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.2 ClusterName:embed-certs-444000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested
:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 09:53:02.119770    5490 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 09:53:02.122025    5490 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 09:53:02.122051    5490 cni.go:84] Creating CNI manager for ""
	I0919 09:53:02.122059    5490 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 09:53:02.122066    5490 start_flags.go:321] config:
	{Name:embed-certs-444000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:embed-certs-444000 Namespace:default
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/m
inikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 09:53:02.126288    5490 iso.go:125] acquiring lock: {Name:mka8e023e06bcb4803b8b8b48f1b2d6ef6b15681 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 09:53:02.133763    5490 out.go:177] * Starting control plane node embed-certs-444000 in cluster embed-certs-444000
	I0919 09:53:02.137630    5490 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0919 09:53:02.137652    5490 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0919 09:53:02.137665    5490 cache.go:57] Caching tarball of preloaded images
	I0919 09:53:02.137723    5490 preload.go:174] Found /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0919 09:53:02.137729    5490 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0919 09:53:02.137795    5490 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/embed-certs-444000/config.json ...
	I0919 09:53:02.138131    5490 start.go:365] acquiring machines lock for embed-certs-444000: {Name:mk731f293d5b39390bcf4e15f4078ebe7c03576e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 09:53:03.834120    5490 start.go:369] acquired machines lock for "embed-certs-444000" in 1.695992583s
	I0919 09:53:03.834293    5490 start.go:96] Skipping create...Using existing machine configuration
	I0919 09:53:03.834312    5490 fix.go:54] fixHost starting: 
	I0919 09:53:03.835024    5490 fix.go:102] recreateIfNeeded on embed-certs-444000: state=Stopped err=<nil>
	W0919 09:53:03.835076    5490 fix.go:128] unexpected machine state, will restart: <nil>
	I0919 09:53:03.840523    5490 out.go:177] * Restarting existing qemu2 VM for "embed-certs-444000" ...
	I0919 09:53:03.850654    5490 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17240-943/.minikube/machines/embed-certs-444000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/embed-certs-444000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17240-943/.minikube/machines/embed-certs-444000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:ae:73:cd:3d:4a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/embed-certs-444000/disk.qcow2
	I0919 09:53:03.859506    5490 main.go:141] libmachine: STDOUT: 
	I0919 09:53:03.859596    5490 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 09:53:03.859730    5490 fix.go:56] fixHost completed within 25.407125ms
	I0919 09:53:03.859753    5490 start.go:83] releasing machines lock for "embed-certs-444000", held for 25.594583ms
	W0919 09:53:03.859791    5490 start.go:688] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 09:53:03.860032    5490 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:53:03.860053    5490 start.go:703] Will try again in 5 seconds ...
	I0919 09:53:08.862284    5490 start.go:365] acquiring machines lock for embed-certs-444000: {Name:mk731f293d5b39390bcf4e15f4078ebe7c03576e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 09:53:08.862714    5490 start.go:369] acquired machines lock for "embed-certs-444000" in 329.708µs
	I0919 09:53:08.862841    5490 start.go:96] Skipping create...Using existing machine configuration
	I0919 09:53:08.862862    5490 fix.go:54] fixHost starting: 
	I0919 09:53:08.863539    5490 fix.go:102] recreateIfNeeded on embed-certs-444000: state=Stopped err=<nil>
	W0919 09:53:08.863565    5490 fix.go:128] unexpected machine state, will restart: <nil>
	I0919 09:53:08.871212    5490 out.go:177] * Restarting existing qemu2 VM for "embed-certs-444000" ...
	I0919 09:53:08.875366    5490 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17240-943/.minikube/machines/embed-certs-444000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/embed-certs-444000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17240-943/.minikube/machines/embed-certs-444000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:ae:73:cd:3d:4a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/embed-certs-444000/disk.qcow2
	I0919 09:53:08.884111    5490 main.go:141] libmachine: STDOUT: 
	I0919 09:53:08.884178    5490 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 09:53:08.884279    5490 fix.go:56] fixHost completed within 21.415875ms
	I0919 09:53:08.884303    5490 start.go:83] releasing machines lock for "embed-certs-444000", held for 21.565916ms
	W0919 09:53:08.884492    5490 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-444000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-444000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:53:08.897163    5490 out.go:177] 
	W0919 09:53:08.901884    5490 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 09:53:08.901913    5490 out.go:239] * 
	* 
	W0919 09:53:08.903966    5490 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 09:53:08.917152    5490 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-444000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-444000 -n embed-certs-444000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-444000 -n embed-certs-444000: exit status 7 (50.029375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-444000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (6.96s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-444000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-444000 -n embed-certs-444000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-444000 -n embed-certs-444000: exit status 7 (35.269417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-444000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.04s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-444000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-444000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-444000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.485583ms)

** stderr ** 
	error: context "embed-certs-444000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-444000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-444000 -n embed-certs-444000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-444000 -n embed-certs-444000: exit status 7 (31.7675ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-444000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p embed-certs-444000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p embed-certs-444000 "sudo crictl images -o json": exit status 89 (41.992209ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p embed-certs-444000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p embed-certs-444000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p embed-certs-444000"
start_stop_delete_test.go:304: v1.28.2 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.2",
- 	"registry.k8s.io/kube-controller-manager:v1.28.2",
- 	"registry.k8s.io/kube-proxy:v1.28.2",
- 	"registry.k8s.io/kube-scheduler:v1.28.2",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-444000 -n embed-certs-444000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-444000 -n embed-certs-444000: exit status 7 (28.045917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-444000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-444000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-444000 --alsologtostderr -v=1: exit status 89 (43.755375ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p embed-certs-444000"

-- /stdout --
** stderr ** 
	I0919 09:53:09.168530    5514 out.go:296] Setting OutFile to fd 1 ...
	I0919 09:53:09.168695    5514 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:53:09.168700    5514 out.go:309] Setting ErrFile to fd 2...
	I0919 09:53:09.168702    5514 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:53:09.168849    5514 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17240-943/.minikube/bin
	I0919 09:53:09.169075    5514 out.go:303] Setting JSON to false
	I0919 09:53:09.169085    5514 mustload.go:65] Loading cluster: embed-certs-444000
	I0919 09:53:09.169283    5514 config.go:182] Loaded profile config "embed-certs-444000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 09:53:09.174166    5514 out.go:177] * The control plane node must be running for this command
	I0919 09:53:09.182220    5514 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-444000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-444000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-444000 -n embed-certs-444000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-444000 -n embed-certs-444000: exit status 7 (26.526667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-444000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-444000 -n embed-certs-444000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-444000 -n embed-certs-444000: exit status 7 (26.960125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-444000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)

TestStartStop/group/newest-cni/serial/FirstStart (11.39s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-104000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-104000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.2: exit status 80 (11.323755625s)

-- stdout --
	* [newest-cni-104000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17240
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node newest-cni-104000 in cluster newest-cni-104000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-104000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0919 09:53:09.615999    5540 out.go:296] Setting OutFile to fd 1 ...
	I0919 09:53:09.616122    5540 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:53:09.616126    5540 out.go:309] Setting ErrFile to fd 2...
	I0919 09:53:09.616128    5540 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:53:09.616272    5540 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17240-943/.minikube/bin
	I0919 09:53:09.617336    5540 out.go:303] Setting JSON to false
	I0919 09:53:09.632607    5540 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1363,"bootTime":1695141026,"procs":413,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0919 09:53:09.632706    5540 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0919 09:53:09.637095    5540 out.go:177] * [newest-cni-104000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0919 09:53:09.644164    5540 out.go:177]   - MINIKUBE_LOCATION=17240
	I0919 09:53:09.644202    5540 notify.go:220] Checking for updates...
	I0919 09:53:09.648124    5540 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	I0919 09:53:09.652106    5540 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 09:53:09.655324    5540 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 09:53:09.658169    5540 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	I0919 09:53:09.661182    5540 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 09:53:09.664582    5540 config.go:182] Loaded profile config "default-k8s-diff-port-645000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 09:53:09.664648    5540 config.go:182] Loaded profile config "multinode-120000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 09:53:09.664695    5540 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 09:53:09.669133    5540 out.go:177] * Using the qemu2 driver based on user configuration
	I0919 09:53:09.676175    5540 start.go:298] selected driver: qemu2
	I0919 09:53:09.676185    5540 start.go:902] validating driver "qemu2" against <nil>
	I0919 09:53:09.676192    5540 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 09:53:09.678270    5540 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	W0919 09:53:09.678290    5540 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0919 09:53:09.686116    5540 out.go:177] * Automatically selected the socket_vmnet network
	I0919 09:53:09.689257    5540 start_flags.go:941] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0919 09:53:09.689283    5540 cni.go:84] Creating CNI manager for ""
	I0919 09:53:09.689293    5540 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 09:53:09.689297    5540 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0919 09:53:09.689305    5540 start_flags.go:321] config:
	{Name:newest-cni-104000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:newest-cni-104000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:d
ocker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/s
ocket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 09:53:09.693539    5540 iso.go:125] acquiring lock: {Name:mka8e023e06bcb4803b8b8b48f1b2d6ef6b15681 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 09:53:09.700149    5540 out.go:177] * Starting control plane node newest-cni-104000 in cluster newest-cni-104000
	I0919 09:53:09.703149    5540 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0919 09:53:09.703168    5540 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0919 09:53:09.703179    5540 cache.go:57] Caching tarball of preloaded images
	I0919 09:53:09.703255    5540 preload.go:174] Found /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0919 09:53:09.703261    5540 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0919 09:53:09.703330    5540 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/newest-cni-104000/config.json ...
	I0919 09:53:09.703343    5540 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/newest-cni-104000/config.json: {Name:mkb6c2936fcb3bba9753445bbb252eafb73e346d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 09:53:09.703561    5540 start.go:365] acquiring machines lock for newest-cni-104000: {Name:mk731f293d5b39390bcf4e15f4078ebe7c03576e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 09:53:11.198071    5540 start.go:369] acquired machines lock for "newest-cni-104000" in 1.494509167s
	I0919 09:53:11.198317    5540 start.go:93] Provisioning new machine with config: &{Name:newest-cni-104000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.2 ClusterName:newest-cni-104000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountS
tring:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 09:53:11.198538    5540 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 09:53:11.208048    5540 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0919 09:53:11.253641    5540 start.go:159] libmachine.API.Create for "newest-cni-104000" (driver="qemu2")
	I0919 09:53:11.253684    5540 client.go:168] LocalClient.Create starting
	I0919 09:53:11.253801    5540 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca.pem
	I0919 09:53:11.253852    5540 main.go:141] libmachine: Decoding PEM data...
	I0919 09:53:11.253876    5540 main.go:141] libmachine: Parsing certificate...
	I0919 09:53:11.253939    5540 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/cert.pem
	I0919 09:53:11.253979    5540 main.go:141] libmachine: Decoding PEM data...
	I0919 09:53:11.253997    5540 main.go:141] libmachine: Parsing certificate...
	I0919 09:53:11.254566    5540 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17240-943/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17240-943/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0919 09:53:11.387617    5540 main.go:141] libmachine: Creating SSH key...
	I0919 09:53:11.509068    5540 main.go:141] libmachine: Creating Disk image...
	I0919 09:53:11.509079    5540 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 09:53:11.509243    5540 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17240-943/.minikube/machines/newest-cni-104000/disk.qcow2.raw /Users/jenkins/minikube-integration/17240-943/.minikube/machines/newest-cni-104000/disk.qcow2
	I0919 09:53:11.518388    5540 main.go:141] libmachine: STDOUT: 
	I0919 09:53:11.518407    5540 main.go:141] libmachine: STDERR: 
	I0919 09:53:11.518457    5540 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/newest-cni-104000/disk.qcow2 +20000M
	I0919 09:53:11.527691    5540 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 09:53:11.527703    5540 main.go:141] libmachine: STDERR: 
	I0919 09:53:11.527717    5540 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17240-943/.minikube/machines/newest-cni-104000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17240-943/.minikube/machines/newest-cni-104000/disk.qcow2
	I0919 09:53:11.527722    5540 main.go:141] libmachine: Starting QEMU VM...
	I0919 09:53:11.527765    5540 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17240-943/.minikube/machines/newest-cni-104000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/newest-cni-104000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17240-943/.minikube/machines/newest-cni-104000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:78:a3:18:77:5a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/newest-cni-104000/disk.qcow2
	I0919 09:53:11.529361    5540 main.go:141] libmachine: STDOUT: 
	I0919 09:53:11.529375    5540 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 09:53:11.529395    5540 client.go:171] LocalClient.Create took 275.710084ms
	I0919 09:53:13.531570    5540 start.go:128] duration metric: createHost completed in 2.333036583s
	I0919 09:53:13.531645    5540 start.go:83] releasing machines lock for "newest-cni-104000", held for 2.33358325s
	W0919 09:53:13.531722    5540 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:53:13.547251    5540 out.go:177] * Deleting "newest-cni-104000" in qemu2 ...
	W0919 09:53:13.569276    5540 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:53:13.569313    5540 start.go:703] Will try again in 5 seconds ...
	I0919 09:53:18.571418    5540 start.go:365] acquiring machines lock for newest-cni-104000: {Name:mk731f293d5b39390bcf4e15f4078ebe7c03576e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 09:53:18.584016    5540 start.go:369] acquired machines lock for "newest-cni-104000" in 12.5125ms
	I0919 09:53:18.584097    5540 start.go:93] Provisioning new machine with config: &{Name:newest-cni-104000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.2 ClusterName:newest-cni-104000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountS
tring:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 09:53:18.584347    5540 start.go:125] createHost starting for "" (driver="qemu2")
	I0919 09:53:18.592405    5540 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0919 09:53:18.636877    5540 start.go:159] libmachine.API.Create for "newest-cni-104000" (driver="qemu2")
	I0919 09:53:18.636906    5540 client.go:168] LocalClient.Create starting
	I0919 09:53:18.637049    5540 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/ca.pem
	I0919 09:53:18.637102    5540 main.go:141] libmachine: Decoding PEM data...
	I0919 09:53:18.637125    5540 main.go:141] libmachine: Parsing certificate...
	I0919 09:53:18.637193    5540 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17240-943/.minikube/certs/cert.pem
	I0919 09:53:18.637231    5540 main.go:141] libmachine: Decoding PEM data...
	I0919 09:53:18.637246    5540 main.go:141] libmachine: Parsing certificate...
	I0919 09:53:18.637747    5540 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17240-943/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17240-943/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I0919 09:53:18.771963    5540 main.go:141] libmachine: Creating SSH key...
	I0919 09:53:18.853826    5540 main.go:141] libmachine: Creating Disk image...
	I0919 09:53:18.853834    5540 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0919 09:53:18.853988    5540 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17240-943/.minikube/machines/newest-cni-104000/disk.qcow2.raw /Users/jenkins/minikube-integration/17240-943/.minikube/machines/newest-cni-104000/disk.qcow2
	I0919 09:53:18.863133    5540 main.go:141] libmachine: STDOUT: 
	I0919 09:53:18.863150    5540 main.go:141] libmachine: STDERR: 
	I0919 09:53:18.863212    5540 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/newest-cni-104000/disk.qcow2 +20000M
	I0919 09:53:18.874430    5540 main.go:141] libmachine: STDOUT: Image resized.
	
	I0919 09:53:18.874453    5540 main.go:141] libmachine: STDERR: 
	I0919 09:53:18.874466    5540 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17240-943/.minikube/machines/newest-cni-104000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17240-943/.minikube/machines/newest-cni-104000/disk.qcow2
	I0919 09:53:18.874471    5540 main.go:141] libmachine: Starting QEMU VM...
	I0919 09:53:18.874508    5540 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17240-943/.minikube/machines/newest-cni-104000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/newest-cni-104000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17240-943/.minikube/machines/newest-cni-104000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:d7:6b:35:b2:1a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/newest-cni-104000/disk.qcow2
	I0919 09:53:18.876123    5540 main.go:141] libmachine: STDOUT: 
	I0919 09:53:18.876136    5540 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 09:53:18.876150    5540 client.go:171] LocalClient.Create took 239.23175ms
	I0919 09:53:20.878331    5540 start.go:128] duration metric: createHost completed in 2.293982542s
	I0919 09:53:20.878414    5540 start.go:83] releasing machines lock for "newest-cni-104000", held for 2.294410167s
	W0919 09:53:20.878887    5540 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-104000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-104000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:53:20.884724    5540 out.go:177] 
	W0919 09:53:20.891688    5540 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 09:53:20.891713    5540 out.go:239] * 
	* 
	W0919 09:53:20.894286    5540 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 09:53:20.901668    5540 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-104000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-104000 -n newest-cni-104000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-104000 -n newest-cni-104000: exit status 7 (65.339542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-104000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (11.39s)
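Both VM creation attempts above die at the same point: `socket_vmnet_client` cannot reach `/var/run/socket_vmnet` ("Connection refused"), so the failure is in the host-side networking daemon the qemu2 driver depends on, not in QEMU or minikube itself. A minimal pre-flight check (the socket path is taken from the log; the helper name `check_socket` is ours) might look like:

```shell
#!/bin/sh
# Report whether the socket_vmnet unix socket that minikube's qemu2
# driver dials actually exists. "Connection refused" in the log means
# nothing answered on it, typically because the daemon is not running.
check_socket() {
  if [ -S "$1" ]; then
    echo "socket present: $1"
  else
    echo "socket missing: $1"
  fi
}

check_socket /var/run/socket_vmnet
```

If the socket is missing and socket_vmnet was installed as a Homebrew service, restarting it (e.g. `sudo brew services restart socket_vmnet`) is the usual remedy. A daemon that is down for the whole run would also explain the cascade of dependent failures later in the report, such as the missing kubectl contexts and the `no openapi getter` errors.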

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-645000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-645000 create -f testdata/busybox.yaml: exit status 1 (29.809667ms)

** stderr ** 
	error: no openapi getter

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-645000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-645000 -n default-k8s-diff-port-645000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-645000 -n default-k8s-diff-port-645000: exit status 7 (31.389417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-645000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-645000 -n default-k8s-diff-port-645000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-645000 -n default-k8s-diff-port-645000: exit status 7 (30.385ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-645000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-645000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-645000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-645000 describe deploy/metrics-server -n kube-system: exit status 1 (26.362583ms)

** stderr ** 
	error: context "default-k8s-diff-port-645000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-645000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-645000 -n default-k8s-diff-port-645000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-645000 -n default-k8s-diff-port-645000: exit status 7 (28.669167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-645000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)
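The kubectl failures in this block ("context does not exist") are downstream of the failed start: minikube only writes the profile's context into the kubeconfig after a successful `minikube start`. A minimal sketch for confirming that, assuming the profile name from this run (`default-k8s-diff-port-645000`) and a standard `kubectl config` setup:

```shell
#!/bin/sh
# Check whether the minikube profile's kubeconfig context was ever created.
# CTX is the profile name taken from the failing run above.
CTX="default-k8s-diff-port-645000"

if ! command -v kubectl >/dev/null 2>&1; then
    CTX_STATUS="no-kubectl"
elif kubectl config get-contexts -o name 2>/dev/null | grep -qx "$CTX"; then
    CTX_STATUS="exists"
else
    # Expected outcome here: start failed, so the context was never written.
    CTX_STATUS="missing"
fi
echo "context $CTX: $CTX_STATUS"
```

If the context is missing, every `kubectl --context $CTX ...` call in the tests above will exit 1 regardless of what it is asked to do.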

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (7.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-645000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-645000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.2: exit status 80 (6.985825417s)

-- stdout --
	* [default-k8s-diff-port-645000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17240
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node default-k8s-diff-port-645000 in cluster default-k8s-diff-port-645000
	* Restarting existing qemu2 VM for "default-k8s-diff-port-645000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-645000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0919 09:53:11.666127    5568 out.go:296] Setting OutFile to fd 1 ...
	I0919 09:53:11.666256    5568 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:53:11.666259    5568 out.go:309] Setting ErrFile to fd 2...
	I0919 09:53:11.666262    5568 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:53:11.666386    5568 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17240-943/.minikube/bin
	I0919 09:53:11.667370    5568 out.go:303] Setting JSON to false
	I0919 09:53:11.682474    5568 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1365,"bootTime":1695141026,"procs":413,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0919 09:53:11.682575    5568 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0919 09:53:11.686927    5568 out.go:177] * [default-k8s-diff-port-645000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0919 09:53:11.693951    5568 out.go:177]   - MINIKUBE_LOCATION=17240
	I0919 09:53:11.694020    5568 notify.go:220] Checking for updates...
	I0919 09:53:11.697889    5568 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	I0919 09:53:11.701922    5568 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 09:53:11.704892    5568 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 09:53:11.707846    5568 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	I0919 09:53:11.710878    5568 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 09:53:11.714271    5568 config.go:182] Loaded profile config "default-k8s-diff-port-645000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 09:53:11.714535    5568 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 09:53:11.718873    5568 out.go:177] * Using the qemu2 driver based on existing profile
	I0919 09:53:11.725878    5568 start.go:298] selected driver: qemu2
	I0919 09:53:11.725893    5568 start.go:902] validating driver "qemu2" against &{Name:default-k8s-diff-port-645000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-645000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subne
t: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 09:53:11.725984    5568 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 09:53:11.727990    5568 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 09:53:11.728018    5568 cni.go:84] Creating CNI manager for ""
	I0919 09:53:11.728026    5568 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 09:53:11.728031    5568 start_flags.go:321] config:
	{Name:default-k8s-diff-port-645000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-6450
00 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 09:53:11.731969    5568 iso.go:125] acquiring lock: {Name:mka8e023e06bcb4803b8b8b48f1b2d6ef6b15681 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 09:53:11.738870    5568 out.go:177] * Starting control plane node default-k8s-diff-port-645000 in cluster default-k8s-diff-port-645000
	I0919 09:53:11.741808    5568 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0919 09:53:11.741827    5568 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0919 09:53:11.741835    5568 cache.go:57] Caching tarball of preloaded images
	I0919 09:53:11.741883    5568 preload.go:174] Found /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0919 09:53:11.741888    5568 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0919 09:53:11.741956    5568 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/default-k8s-diff-port-645000/config.json ...
	I0919 09:53:11.742335    5568 start.go:365] acquiring machines lock for default-k8s-diff-port-645000: {Name:mk731f293d5b39390bcf4e15f4078ebe7c03576e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 09:53:13.531853    5568 start.go:369] acquired machines lock for "default-k8s-diff-port-645000" in 1.789462292s
	I0919 09:53:13.531932    5568 start.go:96] Skipping create...Using existing machine configuration
	I0919 09:53:13.531962    5568 fix.go:54] fixHost starting: 
	I0919 09:53:13.532669    5568 fix.go:102] recreateIfNeeded on default-k8s-diff-port-645000: state=Stopped err=<nil>
	W0919 09:53:13.532716    5568 fix.go:128] unexpected machine state, will restart: <nil>
	I0919 09:53:13.537225    5568 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-645000" ...
	I0919 09:53:13.550435    5568 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17240-943/.minikube/machines/default-k8s-diff-port-645000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/default-k8s-diff-port-645000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17240-943/.minikube/machines/default-k8s-diff-port-645000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:50:b4:ab:c5:33 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/default-k8s-diff-port-645000/disk.qcow2
	I0919 09:53:13.559786    5568 main.go:141] libmachine: STDOUT: 
	I0919 09:53:13.559867    5568 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 09:53:13.560002    5568 fix.go:56] fixHost completed within 28.042125ms
	I0919 09:53:13.560022    5568 start.go:83] releasing machines lock for "default-k8s-diff-port-645000", held for 28.134333ms
	W0919 09:53:13.560063    5568 start.go:688] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 09:53:13.560273    5568 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:53:13.560296    5568 start.go:703] Will try again in 5 seconds ...
	I0919 09:53:18.562519    5568 start.go:365] acquiring machines lock for default-k8s-diff-port-645000: {Name:mk731f293d5b39390bcf4e15f4078ebe7c03576e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 09:53:18.562969    5568 start.go:369] acquired machines lock for "default-k8s-diff-port-645000" in 350.583µs
	I0919 09:53:18.563108    5568 start.go:96] Skipping create...Using existing machine configuration
	I0919 09:53:18.563129    5568 fix.go:54] fixHost starting: 
	I0919 09:53:18.563840    5568 fix.go:102] recreateIfNeeded on default-k8s-diff-port-645000: state=Stopped err=<nil>
	W0919 09:53:18.563868    5568 fix.go:128] unexpected machine state, will restart: <nil>
	I0919 09:53:18.571589    5568 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-645000" ...
	I0919 09:53:18.574726    5568 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17240-943/.minikube/machines/default-k8s-diff-port-645000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/default-k8s-diff-port-645000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17240-943/.minikube/machines/default-k8s-diff-port-645000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:50:b4:ab:c5:33 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/default-k8s-diff-port-645000/disk.qcow2
	I0919 09:53:18.583771    5568 main.go:141] libmachine: STDOUT: 
	I0919 09:53:18.583831    5568 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 09:53:18.583917    5568 fix.go:56] fixHost completed within 20.790458ms
	I0919 09:53:18.583939    5568 start.go:83] releasing machines lock for "default-k8s-diff-port-645000", held for 20.942334ms
	W0919 09:53:18.584182    5568 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-645000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-645000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:53:18.600544    5568 out.go:177] 
	W0919 09:53:18.604625    5568 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 09:53:18.604642    5568 out.go:239] * 
	* 
	W0919 09:53:18.606136    5568 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 09:53:18.616568    5568 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-645000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-645000 -n default-k8s-diff-port-645000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-645000 -n default-k8s-diff-port-645000: exit status 7 (44.878208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-645000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (7.03s)
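Every restart attempt in this group fails the same way: the qemu2 driver cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"). A minimal diagnostic sketch, assuming the socket path from the QEMU invocation above and the default Homebrew-managed socket_vmnet service:

```shell
#!/bin/sh
# Check whether the socket_vmnet unix socket that the qemu2 driver dials
# is present. SOCKET defaults to the path seen in the failing runs above.
SOCKET="${SOCKET:-/var/run/socket_vmnet}"

if [ -S "$SOCKET" ]; then
    STATUS="present"
else
    STATUS="missing"
fi
echo "socket_vmnet socket $STATUS at $SOCKET"

if [ "$STATUS" = "missing" ]; then
    # A "Connection refused" can also occur when the file exists but the
    # daemon is not listening; restarting the service covers both cases.
    echo "try: sudo brew services start socket_vmnet"
fi
```

Until the daemon accepts connections, every VM restart in this group will fail before QEMU boots, which explains the cascading EnableAddon/UserApp/AddonExists/VerifyKubernetesImages/Pause failures that follow.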

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-645000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-645000 -n default-k8s-diff-port-645000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-645000 -n default-k8s-diff-port-645000: exit status 7 (31.935917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-645000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-645000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-645000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-645000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.130458ms)

** stderr ** 
	error: context "default-k8s-diff-port-645000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-645000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-645000 -n default-k8s-diff-port-645000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-645000 -n default-k8s-diff-port-645000: exit status 7 (31.259333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-645000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-645000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-645000 "sudo crictl images -o json": exit status 89 (42.448417ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-645000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-645000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p default-k8s-diff-port-645000"
start_stop_delete_test.go:304: v1.28.2 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.2",
- 	"registry.k8s.io/kube-controller-manager:v1.28.2",
- 	"registry.k8s.io/kube-proxy:v1.28.2",
- 	"registry.k8s.io/kube-scheduler:v1.28.2",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-645000 -n default-k8s-diff-port-645000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-645000 -n default-k8s-diff-port-645000: exit status 7 (28.05ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-645000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-645000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-645000 --alsologtostderr -v=1: exit status 89 (45.784167ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-645000"

-- /stdout --
** stderr ** 
	I0919 09:53:18.862521    5595 out.go:296] Setting OutFile to fd 1 ...
	I0919 09:53:18.862652    5595 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:53:18.862655    5595 out.go:309] Setting ErrFile to fd 2...
	I0919 09:53:18.862658    5595 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:53:18.862799    5595 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17240-943/.minikube/bin
	I0919 09:53:18.863006    5595 out.go:303] Setting JSON to false
	I0919 09:53:18.863017    5595 mustload.go:65] Loading cluster: default-k8s-diff-port-645000
	I0919 09:53:18.863227    5595 config.go:182] Loaded profile config "default-k8s-diff-port-645000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 09:53:18.867550    5595 out.go:177] * The control plane node must be running for this command
	I0919 09:53:18.877614    5595 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-645000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-645000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-645000 -n default-k8s-diff-port-645000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-645000 -n default-k8s-diff-port-645000: exit status 7 (26.545458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-645000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-645000 -n default-k8s-diff-port-645000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-645000 -n default-k8s-diff-port-645000: exit status 7 (27.435ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-645000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)

TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-104000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-104000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.2: exit status 80 (5.184141708s)

-- stdout --
	* [newest-cni-104000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17240
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node newest-cni-104000 in cluster newest-cni-104000
	* Restarting existing qemu2 VM for "newest-cni-104000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-104000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0919 09:53:21.217248    5631 out.go:296] Setting OutFile to fd 1 ...
	I0919 09:53:21.217352    5631 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:53:21.217355    5631 out.go:309] Setting ErrFile to fd 2...
	I0919 09:53:21.217358    5631 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:53:21.217487    5631 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17240-943/.minikube/bin
	I0919 09:53:21.218527    5631 out.go:303] Setting JSON to false
	I0919 09:53:21.233574    5631 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1375,"bootTime":1695141026,"procs":413,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0919 09:53:21.233635    5631 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0919 09:53:21.238311    5631 out.go:177] * [newest-cni-104000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0919 09:53:21.246309    5631 out.go:177]   - MINIKUBE_LOCATION=17240
	I0919 09:53:21.250267    5631 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	I0919 09:53:21.246378    5631 notify.go:220] Checking for updates...
	I0919 09:53:21.257220    5631 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 09:53:21.260260    5631 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 09:53:21.263301    5631 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	I0919 09:53:21.266261    5631 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 09:53:21.269522    5631 config.go:182] Loaded profile config "newest-cni-104000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 09:53:21.269821    5631 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 09:53:21.274194    5631 out.go:177] * Using the qemu2 driver based on existing profile
	I0919 09:53:21.281251    5631 start.go:298] selected driver: qemu2
	I0919 09:53:21.281259    5631 start.go:902] validating driver "qemu2" against &{Name:newest-cni-104000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:newest-cni-104000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 09:53:21.281312    5631 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 09:53:21.283479    5631 start_flags.go:941] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0919 09:53:21.283504    5631 cni.go:84] Creating CNI manager for ""
	I0919 09:53:21.283516    5631 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 09:53:21.283523    5631 start_flags.go:321] config:
	{Name:newest-cni-104000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:newest-cni-104000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 09:53:21.287709    5631 iso.go:125] acquiring lock: {Name:mka8e023e06bcb4803b8b8b48f1b2d6ef6b15681 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 09:53:21.295174    5631 out.go:177] * Starting control plane node newest-cni-104000 in cluster newest-cni-104000
	I0919 09:53:21.299188    5631 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0919 09:53:21.299204    5631 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0919 09:53:21.299212    5631 cache.go:57] Caching tarball of preloaded images
	I0919 09:53:21.299267    5631 preload.go:174] Found /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0919 09:53:21.299273    5631 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0919 09:53:21.299338    5631 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/newest-cni-104000/config.json ...
	I0919 09:53:21.299711    5631 start.go:365] acquiring machines lock for newest-cni-104000: {Name:mk731f293d5b39390bcf4e15f4078ebe7c03576e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 09:53:21.299742    5631 start.go:369] acquired machines lock for "newest-cni-104000" in 26µs
	I0919 09:53:21.299753    5631 start.go:96] Skipping create...Using existing machine configuration
	I0919 09:53:21.299758    5631 fix.go:54] fixHost starting: 
	I0919 09:53:21.299871    5631 fix.go:102] recreateIfNeeded on newest-cni-104000: state=Stopped err=<nil>
	W0919 09:53:21.299879    5631 fix.go:128] unexpected machine state, will restart: <nil>
	I0919 09:53:21.304289    5631 out.go:177] * Restarting existing qemu2 VM for "newest-cni-104000" ...
	I0919 09:53:21.312200    5631 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17240-943/.minikube/machines/newest-cni-104000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/newest-cni-104000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17240-943/.minikube/machines/newest-cni-104000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:d7:6b:35:b2:1a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/newest-cni-104000/disk.qcow2
	I0919 09:53:21.314012    5631 main.go:141] libmachine: STDOUT: 
	I0919 09:53:21.314032    5631 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 09:53:21.314059    5631 fix.go:56] fixHost completed within 14.301417ms
	I0919 09:53:21.314063    5631 start.go:83] releasing machines lock for "newest-cni-104000", held for 14.316833ms
	W0919 09:53:21.314068    5631 start.go:688] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 09:53:21.314107    5631 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:53:21.314111    5631 start.go:703] Will try again in 5 seconds ...
	I0919 09:53:26.316279    5631 start.go:365] acquiring machines lock for newest-cni-104000: {Name:mk731f293d5b39390bcf4e15f4078ebe7c03576e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 09:53:26.316725    5631 start.go:369] acquired machines lock for "newest-cni-104000" in 351.541µs
	I0919 09:53:26.316877    5631 start.go:96] Skipping create...Using existing machine configuration
	I0919 09:53:26.316899    5631 fix.go:54] fixHost starting: 
	I0919 09:53:26.317560    5631 fix.go:102] recreateIfNeeded on newest-cni-104000: state=Stopped err=<nil>
	W0919 09:53:26.317586    5631 fix.go:128] unexpected machine state, will restart: <nil>
	I0919 09:53:26.325012    5631 out.go:177] * Restarting existing qemu2 VM for "newest-cni-104000" ...
	I0919 09:53:26.330260    5631 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17240-943/.minikube/machines/newest-cni-104000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/newest-cni-104000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17240-943/.minikube/machines/newest-cni-104000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:d7:6b:35:b2:1a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17240-943/.minikube/machines/newest-cni-104000/disk.qcow2
	I0919 09:53:26.338941    5631 main.go:141] libmachine: STDOUT: 
	I0919 09:53:26.338992    5631 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0919 09:53:26.339063    5631 fix.go:56] fixHost completed within 22.167125ms
	I0919 09:53:26.339080    5631 start.go:83] releasing machines lock for "newest-cni-104000", held for 22.334042ms
	W0919 09:53:26.339259    5631 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-104000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-104000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0919 09:53:26.347988    5631 out.go:177] 
	W0919 09:53:26.352069    5631 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0919 09:53:26.352099    5631 out.go:239] * 
	* 
	W0919 09:53:26.355001    5631 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 09:53:26.362942    5631 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-104000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-104000 -n newest-cni-104000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-104000 -n newest-cni-104000: exit status 7 (64.523ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-104000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.25s)
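Both restart attempts above fail identically before the test gives up. A minimal Go sketch of the retry-once-then-exit pattern the log shows (the names `startHost` and `startWithRetry` are hypothetical stand-ins, not minikube's actual code):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for the driver start, which fails here because
// the socket_vmnet daemon is not accepting connections.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

// startWithRetry mirrors the behavior visible in the log: warn on the
// first failure, wait, retry once, then surface the error to the caller.
func startWithRetry(delay time.Duration) error {
	if err := startHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(delay)
		return startHost()
	}
	return nil
}

func main() {
	if err := startWithRetry(10 * time.Millisecond); err != nil {
		fmt.Println("X Exiting due to GUEST_PROVISION:", err)
	}
}
```

Since the failure is in the host network daemon rather than the VM, the second attempt fails for the same reason as the first, which is why the run exits with status 80 after only ~5 seconds.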

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p newest-cni-104000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p newest-cni-104000 "sudo crictl images -o json": exit status 89 (42.414667ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p newest-cni-104000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p newest-cni-104000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p newest-cni-104000"
start_stop_delete_test.go:304: v1.28.2 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.2",
- 	"registry.k8s.io/kube-controller-manager:v1.28.2",
- 	"registry.k8s.io/kube-proxy:v1.28.2",
- 	"registry.k8s.io/kube-scheduler:v1.28.2",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-104000 -n newest-cni-104000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-104000 -n newest-cni-104000: exit status 7 (27.521542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-104000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)
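The decode error above is the test attempting to parse the plain-text usage message as JSON. A sketch of what decoding a healthy `crictl images -o json` payload looks like in Go (the struct shape is assumed from CRI's image-listing response; `crictlImages` and `repoTags` are illustrative names, not the test's own decoder):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// crictlImages models the assumed shape of `crictl images -o json` output.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// repoTags decodes the payload and flattens every image's tags.
// Feeding it the usage message seen above fails the same way the test
// did: '*' is not a valid start of a JSON value.
func repoTags(payload string) ([]string, error) {
	var out crictlImages
	if err := json.Unmarshal([]byte(payload), &out); err != nil {
		return nil, err
	}
	var tags []string
	for _, img := range out.Images {
		tags = append(tags, img.RepoTags...)
	}
	return tags, nil
}

func main() {
	tags, _ := repoTags(`{"images":[{"repoTags":["registry.k8s.io/pause:3.9"]}]}`)
	fmt.Println(tags) // prints [registry.k8s.io/pause:3.9]

	_, err := repoTags(`* The control plane node must be running for this command`)
	fmt.Println(err) // invalid character '*' looking for beginning of value
}
```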

TestStartStop/group/newest-cni/serial/Pause (0.09s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-104000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-104000 --alsologtostderr -v=1: exit status 89 (38.961916ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p newest-cni-104000"

-- /stdout --
** stderr ** 
	I0919 09:53:26.539632    5647 out.go:296] Setting OutFile to fd 1 ...
	I0919 09:53:26.539769    5647 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:53:26.539773    5647 out.go:309] Setting ErrFile to fd 2...
	I0919 09:53:26.539775    5647 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:53:26.539912    5647 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17240-943/.minikube/bin
	I0919 09:53:26.540153    5647 out.go:303] Setting JSON to false
	I0919 09:53:26.540162    5647 mustload.go:65] Loading cluster: newest-cni-104000
	I0919 09:53:26.540369    5647 config.go:182] Loaded profile config "newest-cni-104000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 09:53:26.543610    5647 out.go:177] * The control plane node must be running for this command
	I0919 09:53:26.547730    5647 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-104000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-104000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-104000 -n newest-cni-104000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-104000 -n newest-cni-104000: exit status 7 (27.60325ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-104000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-104000 -n newest-cni-104000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-104000 -n newest-cni-104000: exit status 7 (27.367333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-104000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.09s)
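The `--format={{.Host}}` flag used throughout these post-mortems is a Go text/template rendered against minikube's status value; selecting one field is why the command prints just `Stopped`. A minimal sketch, with a hypothetical `nodeStatus` stand-in for the real status struct:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// nodeStatus is a stand-in for the struct minikube renders with
// --format; only the Host field matters for {{.Host}}.
type nodeStatus struct {
	Host string
}

// render applies a --format style Go text/template to a status value.
func render(format string, st nodeStatus) (string, error) {
	t, err := template.New("status").Parse(format)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, st); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	out, _ := render("{{.Host}}", nodeStatus{Host: "Stopped"})
	fmt.Println(out) // prints "Stopped", matching the post-mortem output above
}
```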


Test pass (136/244)

Order passed test Duration
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.09
10 TestDownloadOnly/v1.28.2/json-events 13.18
11 TestDownloadOnly/v1.28.2/preload-exists 0
14 TestDownloadOnly/v1.28.2/kubectl 0
15 TestDownloadOnly/v1.28.2/LogsDuration 0.07
16 TestDownloadOnly/DeleteAll 0.23
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.23
19 TestBinaryMirror 0.36
30 TestHyperKitDriverInstallOrUpdate 8.52
33 TestErrorSpam/setup 28.63
34 TestErrorSpam/start 0.34
35 TestErrorSpam/status 0.25
36 TestErrorSpam/pause 0.66
37 TestErrorSpam/unpause 0.61
38 TestErrorSpam/stop 3.23
41 TestFunctional/serial/CopySyncFile 0
42 TestFunctional/serial/StartWithProxy 44.27
43 TestFunctional/serial/AuditLog 0
44 TestFunctional/serial/SoftStart 36.92
45 TestFunctional/serial/KubeContext 0.03
46 TestFunctional/serial/KubectlGetPods 0.05
49 TestFunctional/serial/CacheCmd/cache/add_remote 3.55
50 TestFunctional/serial/CacheCmd/cache/add_local 1.28
51 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.03
52 TestFunctional/serial/CacheCmd/cache/list 0.03
53 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.08
54 TestFunctional/serial/CacheCmd/cache/cache_reload 0.92
55 TestFunctional/serial/CacheCmd/cache/delete 0.06
56 TestFunctional/serial/MinikubeKubectlCmd 0.4
57 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.53
58 TestFunctional/serial/ExtraConfig 36.58
59 TestFunctional/serial/ComponentHealth 0.04
60 TestFunctional/serial/LogsCmd 0.69
61 TestFunctional/serial/LogsFileCmd 0.61
62 TestFunctional/serial/InvalidService 4.33
64 TestFunctional/parallel/ConfigCmd 0.2
65 TestFunctional/parallel/DashboardCmd 13.35
66 TestFunctional/parallel/DryRun 0.22
67 TestFunctional/parallel/InternationalLanguage 0.11
68 TestFunctional/parallel/StatusCmd 0.25
73 TestFunctional/parallel/AddonsCmd 0.12
74 TestFunctional/parallel/PersistentVolumeClaim 24.29
76 TestFunctional/parallel/SSHCmd 0.13
77 TestFunctional/parallel/CpCmd 0.28
79 TestFunctional/parallel/FileSync 0.08
80 TestFunctional/parallel/CertSync 0.43
84 TestFunctional/parallel/NodeLabels 0.04
86 TestFunctional/parallel/NonActiveRuntimeDisabled 0.07
88 TestFunctional/parallel/License 0.32
89 TestFunctional/parallel/Version/short 0.04
90 TestFunctional/parallel/Version/components 0.17
91 TestFunctional/parallel/ImageCommands/ImageListShort 0.07
92 TestFunctional/parallel/ImageCommands/ImageListTable 0.08
93 TestFunctional/parallel/ImageCommands/ImageListJson 0.08
94 TestFunctional/parallel/ImageCommands/ImageListYaml 0.07
95 TestFunctional/parallel/ImageCommands/ImageBuild 1.77
96 TestFunctional/parallel/ImageCommands/Setup 1.84
97 TestFunctional/parallel/DockerEnv/bash 0.43
98 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
99 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
100 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
101 TestFunctional/parallel/ServiceCmd/DeployApp 12.12
102 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.21
103 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.56
104 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 3.66
105 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.54
106 TestFunctional/parallel/ImageCommands/ImageRemove 0.16
107 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.59
108 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.64
111 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
113 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.12
114 TestFunctional/parallel/ServiceCmd/List 0.09
115 TestFunctional/parallel/ServiceCmd/JSONOutput 0.09
116 TestFunctional/parallel/ServiceCmd/HTTPS 0.1
117 TestFunctional/parallel/ServiceCmd/Format 0.11
118 TestFunctional/parallel/ServiceCmd/URL 0.11
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
121 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
122 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
123 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.18
126 TestFunctional/parallel/ProfileCmd/profile_list 0.15
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.15
128 TestFunctional/parallel/MountCmd/any-port 5.2
129 TestFunctional/parallel/MountCmd/specific-port 0.78
131 TestFunctional/delete_addon-resizer_images 0.12
132 TestFunctional/delete_my-image_image 0.04
133 TestFunctional/delete_minikube_cached_images 0.04
137 TestImageBuild/serial/Setup 30.11
138 TestImageBuild/serial/NormalBuild 1.06
140 TestImageBuild/serial/BuildWithDockerIgnore 0.13
141 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.1
144 TestIngressAddonLegacy/StartLegacyK8sCluster 66.4
146 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 17.91
147 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.25
151 TestJSONOutput/start/Command 43.68
152 TestJSONOutput/start/Audit 0
154 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
155 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
157 TestJSONOutput/pause/Command 0.28
158 TestJSONOutput/pause/Audit 0
160 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
161 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
163 TestJSONOutput/unpause/Command 0.22
164 TestJSONOutput/unpause/Audit 0
166 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
167 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
169 TestJSONOutput/stop/Command 9.08
170 TestJSONOutput/stop/Audit 0
172 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
173 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
174 TestErrorJSONOutput 0.33
179 TestMainNoArgs 0.03
180 TestMinikubeProfile 61.5
236 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
240 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
241 TestNoKubernetes/serial/ProfileList 0.13
242 TestNoKubernetes/serial/Stop 0.06
244 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
262 TestStartStop/group/old-k8s-version/serial/Stop 0.06
263 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.09
273 TestStartStop/group/no-preload/serial/Stop 0.06
274 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.08
284 TestStartStop/group/embed-certs/serial/Stop 0.06
285 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.09
295 TestStartStop/group/default-k8s-diff-port/serial/Stop 0.06
296 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.09
302 TestStartStop/group/newest-cni/serial/DeployApp 0
303 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
304 TestStartStop/group/newest-cni/serial/Stop 0.06
305 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.09
307 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
308 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-618000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-618000: exit status 85 (91.383042ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-618000 | jenkins | v1.31.2 | 19 Sep 23 09:33 PDT |          |
	|         | -p download-only-618000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/19 09:33:37
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.21.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 09:33:37.634767    2053 out.go:296] Setting OutFile to fd 1 ...
	I0919 09:33:37.634906    2053 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:33:37.634909    2053 out.go:309] Setting ErrFile to fd 2...
	I0919 09:33:37.634912    2053 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:33:37.635036    2053 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17240-943/.minikube/bin
	W0919 09:33:37.635132    2053 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17240-943/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17240-943/.minikube/config/config.json: no such file or directory
	I0919 09:33:37.636303    2053 out.go:303] Setting JSON to true
	I0919 09:33:37.653131    2053 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":191,"bootTime":1695141026,"procs":390,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0919 09:33:37.653201    2053 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0919 09:33:37.660329    2053 out.go:97] [download-only-618000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0919 09:33:37.663223    2053 out.go:169] MINIKUBE_LOCATION=17240
	I0919 09:33:37.660501    2053 notify.go:220] Checking for updates...
	W0919 09:33:37.660523    2053 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball: no such file or directory
	I0919 09:33:37.670239    2053 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	I0919 09:33:37.673276    2053 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 09:33:37.676267    2053 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 09:33:37.679295    2053 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	W0919 09:33:37.685229    2053 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0919 09:33:37.685413    2053 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 09:33:37.690292    2053 out.go:97] Using the qemu2 driver based on user configuration
	I0919 09:33:37.690299    2053 start.go:298] selected driver: qemu2
	I0919 09:33:37.690313    2053 start.go:902] validating driver "qemu2" against <nil>
	I0919 09:33:37.690375    2053 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0919 09:33:37.692371    2053 out.go:169] Automatically selected the socket_vmnet network
	I0919 09:33:37.698462    2053 start_flags.go:384] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0919 09:33:37.698556    2053 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0919 09:33:37.698619    2053 cni.go:84] Creating CNI manager for ""
	I0919 09:33:37.698635    2053 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0919 09:33:37.698640    2053 start_flags.go:321] config:
	{Name:download-only-618000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-618000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRun
time:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 09:33:37.704146    2053 iso.go:125] acquiring lock: {Name:mka8e023e06bcb4803b8b8b48f1b2d6ef6b15681 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 09:33:37.708285    2053 out.go:97] Downloading VM boot image ...
	I0919 09:33:37.708314    2053 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/17240-943/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso
	I0919 09:33:46.142021    2053 out.go:97] Starting control plane node download-only-618000 in cluster download-only-618000
	I0919 09:33:46.142046    2053 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0919 09:33:46.194539    2053 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0919 09:33:46.194548    2053 cache.go:57] Caching tarball of preloaded images
	I0919 09:33:46.194740    2053 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0919 09:33:46.200366    2053 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0919 09:33:46.200372    2053 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0919 09:33:46.282691    2053 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0919 09:33:56.521599    2053 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0919 09:33:56.521727    2053 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0919 09:33:57.160573    2053 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0919 09:33:57.160762    2053 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/download-only-618000/config.json ...
	I0919 09:33:57.160781    2053 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/download-only-618000/config.json: {Name:mk6e0f8ffa2114774311c1ac6767974f1c2debb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 09:33:57.160989    2053 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0919 09:33:57.161153    2053 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 -> /Users/jenkins/minikube-integration/17240-943/.minikube/cache/darwin/arm64/v1.16.0/kubectl
	I0919 09:33:58.469834    2053 out.go:169] 
	W0919 09:33:58.474991    2053 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/17240-943/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x107d257e0 0x107d257e0 0x107d257e0 0x107d257e0 0x107d257e0 0x107d257e0 0x107d257e0] Decompressors:map[bz2:0x140005ae380 gz:0x140005ae388 tar:0x140005ae300 tar.bz2:0x140005ae340 tar.gz:0x140005ae350 tar.xz:0x140005ae360 tar.zst:0x140005ae370 tbz2:0x140005ae340 tgz:0x140005ae350 txz:0x140005ae360 tzst:0x140005ae370 xz:0x140005ae390 zip:0x140005ae3a0 zst:0x140005ae398] Getters:map[file:0x140009d0b70 http:0x140000aa8c0 https:0x140000aa910] Dir:false ProgressListener:
<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0919 09:33:58.475021    2053 out_reason.go:110] 
	W0919 09:33:58.482970    2053 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 09:33:58.486931    2053 out.go:169] 
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-618000"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.09s)
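Note on the kubectl cache failure logged above: the download aborts because the `.sha1` checksum URL for the darwin/arm64 kubectl at v1.16.0 returns HTTP 404 ("bad response code: 404"), most likely because darwin/arm64 kubectl binaries were only published for later Kubernetes releases. A minimal probe of that URL, assuming `curl` and network access are available on the agent:

```shell
# Probe the checksum URL minikube tried to fetch; in the run above it
# answered 404. curl prints "000" instead when the host is unreachable.
code=$(curl -s -o /dev/null -w '%{http_code}' \
  "https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1")
echo "checksum URL -> HTTP ${code}"
```
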

TestDownloadOnly/v1.28.2/json-events (13.18s)

=== RUN   TestDownloadOnly/v1.28.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-618000 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-618000 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=docker --driver=qemu2 : (13.181638458s)
--- PASS: TestDownloadOnly/v1.28.2/json-events (13.18s)

TestDownloadOnly/v1.28.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.2/preload-exists
--- PASS: TestDownloadOnly/v1.28.2/preload-exists (0.00s)

TestDownloadOnly/v1.28.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.2/kubectl
--- PASS: TestDownloadOnly/v1.28.2/kubectl (0.00s)

TestDownloadOnly/v1.28.2/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.28.2/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-618000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-618000: exit status 85 (74.588125ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-618000 | jenkins | v1.31.2 | 19 Sep 23 09:33 PDT |          |
	|         | -p download-only-618000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-618000 | jenkins | v1.31.2 | 19 Sep 23 09:33 PDT |          |
	|         | -p download-only-618000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.2   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/19 09:33:58
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.21.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 09:33:58.661051    2063 out.go:296] Setting OutFile to fd 1 ...
	I0919 09:33:58.661157    2063 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:33:58.661159    2063 out.go:309] Setting ErrFile to fd 2...
	I0919 09:33:58.661163    2063 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:33:58.661283    2063 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17240-943/.minikube/bin
	W0919 09:33:58.661355    2063 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17240-943/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17240-943/.minikube/config/config.json: no such file or directory
	I0919 09:33:58.662235    2063 out.go:303] Setting JSON to true
	I0919 09:33:58.677159    2063 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":212,"bootTime":1695141026,"procs":386,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0919 09:33:58.677239    2063 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0919 09:33:58.682651    2063 out.go:97] [download-only-618000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0919 09:33:58.686535    2063 out.go:169] MINIKUBE_LOCATION=17240
	I0919 09:33:58.682754    2063 notify.go:220] Checking for updates...
	I0919 09:33:58.692637    2063 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	I0919 09:33:58.695561    2063 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 09:33:58.698590    2063 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 09:33:58.701617    2063 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	W0919 09:33:58.707554    2063 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0919 09:33:58.707824    2063 config.go:182] Loaded profile config "download-only-618000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0919 09:33:58.707855    2063 start.go:810] api.Load failed for download-only-618000: filestore "download-only-618000": Docker machine "download-only-618000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0919 09:33:58.707904    2063 driver.go:373] Setting default libvirt URI to qemu:///system
	W0919 09:33:58.707920    2063 start.go:810] api.Load failed for download-only-618000: filestore "download-only-618000": Docker machine "download-only-618000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0919 09:33:58.711550    2063 out.go:97] Using the qemu2 driver based on existing profile
	I0919 09:33:58.711556    2063 start.go:298] selected driver: qemu2
	I0919 09:33:58.711560    2063 start.go:902] validating driver "qemu2" against &{Name:download-only-618000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-618000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binar
yMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 09:33:58.713548    2063 cni.go:84] Creating CNI manager for ""
	I0919 09:33:58.713563    2063 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 09:33:58.713571    2063 start_flags.go:321] config:
	{Name:download-only-618000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:download-only-618000 Namespace:def
ault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath
: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 09:33:58.717355    2063 iso.go:125] acquiring lock: {Name:mka8e023e06bcb4803b8b8b48f1b2d6ef6b15681 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 09:33:58.720605    2063 out.go:97] Starting control plane node download-only-618000 in cluster download-only-618000
	I0919 09:33:58.720612    2063 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0919 09:33:58.772209    2063 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.2/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0919 09:33:58.772224    2063 cache.go:57] Caching tarball of preloaded images
	I0919 09:33:58.772365    2063 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0919 09:33:58.779147    2063 out.go:97] Downloading Kubernetes v1.28.2 preload ...
	I0919 09:33:58.779160    2063 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 ...
	I0919 09:33:58.858620    2063 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.2/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4?checksum=md5:48f32a2a1ca4194a6d2a21c3ded2b2db -> /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0919 09:34:05.086373    2063 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 ...
	I0919 09:34:05.086518    2063 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17240-943/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 ...
	I0919 09:34:05.669231    2063 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0919 09:34:05.669300    2063 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/download-only-618000/config.json ...
	I0919 09:34:05.669556    2063 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0919 09:34:05.669745    2063 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.2/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.2/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/17240-943/.minikube/cache/darwin/arm64/v1.28.2/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-618000"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.2/LogsDuration (0.07s)

TestDownloadOnly/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.23s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.23s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-618000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.23s)

TestBinaryMirror (0.36s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-265000 --alsologtostderr --binary-mirror http://127.0.0.1:49356 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-265000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-265000
--- PASS: TestBinaryMirror (0.36s)

TestHyperKitDriverInstallOrUpdate (8.52s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (8.52s)

TestErrorSpam/setup (28.63s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-163000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-163000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-163000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-163000 --driver=qemu2 : (28.628026167s)
--- PASS: TestErrorSpam/setup (28.63s)

TestErrorSpam/start (0.34s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-163000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-163000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-163000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-163000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-163000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-163000 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

TestErrorSpam/status (0.25s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-163000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-163000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-163000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-163000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-163000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-163000 status
--- PASS: TestErrorSpam/status (0.25s)

TestErrorSpam/pause (0.66s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-163000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-163000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-163000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-163000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-163000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-163000 pause
--- PASS: TestErrorSpam/pause (0.66s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-163000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-163000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-163000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-163000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-163000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-163000 unpause
--- PASS: TestErrorSpam/unpause (0.61s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-163000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-163000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-163000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-163000 stop: (3.06685275s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-163000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-163000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-163000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-163000 stop
--- PASS: TestErrorSpam/stop (3.23s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/17240-943/.minikube/files/etc/test/nested/copy/2051/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-085000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Done: out/minikube-darwin-arm64 start -p functional-085000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (44.267780417s)
--- PASS: TestFunctional/serial/StartWithProxy (44.27s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-085000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-arm64 start -p functional-085000 --alsologtostderr -v=8: (36.915812375s)
functional_test.go:659: soft start took 36.916205667s for "functional-085000" cluster.
--- PASS: TestFunctional/serial/SoftStart (36.92s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-085000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-085000 cache add registry.k8s.io/pause:3.1: (1.287302s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-085000 cache add registry.k8s.io/pause:3.3: (1.183079791s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-085000 cache add registry.k8s.io/pause:latest: (1.080220792s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.55s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-085000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local569555502/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 cache add minikube-local-cache-test:functional-085000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 cache delete minikube-local-cache-test:functional-085000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-085000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.28s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.03s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-085000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (74.1145ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.92s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.06s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 kubectl -- --context functional-085000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.40s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-085000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.53s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-085000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-darwin-arm64 start -p functional-085000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.576400917s)
functional_test.go:757: restart took 36.576545167s for "functional-085000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (36.58s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-085000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.69s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd1564267750/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.61s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-085000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-085000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-085000: exit status 115 (106.464291ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:30309 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-085000 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-085000 delete -f testdata/invalidsvc.yaml: (1.097088s)
--- PASS: TestFunctional/serial/InvalidService (4.33s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-085000 config get cpus: exit status 14 (28.180708ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-085000 config get cpus: exit status 14 (27.766625ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.20s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-085000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-085000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2826: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.35s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-085000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-085000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (120.318708ms)

-- stdout --
	* [functional-085000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17240
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0919 09:38:41.664668    2802 out.go:296] Setting OutFile to fd 1 ...
	I0919 09:38:41.664797    2802 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:38:41.664800    2802 out.go:309] Setting ErrFile to fd 2...
	I0919 09:38:41.664802    2802 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:38:41.664944    2802 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17240-943/.minikube/bin
	I0919 09:38:41.665963    2802 out.go:303] Setting JSON to false
	I0919 09:38:41.683285    2802 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":495,"bootTime":1695141026,"procs":405,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0919 09:38:41.683368    2802 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0919 09:38:41.686947    2802 out.go:177] * [functional-085000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0919 09:38:41.697875    2802 out.go:177]   - MINIKUBE_LOCATION=17240
	I0919 09:38:41.694913    2802 notify.go:220] Checking for updates...
	I0919 09:38:41.705897    2802 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	I0919 09:38:41.708850    2802 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 09:38:41.712879    2802 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 09:38:41.715908    2802 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	I0919 09:38:41.718882    2802 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 09:38:41.722199    2802 config.go:182] Loaded profile config "functional-085000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 09:38:41.722475    2802 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 09:38:41.726874    2802 out.go:177] * Using the qemu2 driver based on existing profile
	I0919 09:38:41.733879    2802 start.go:298] selected driver: qemu2
	I0919 09:38:41.733886    2802 start.go:902] validating driver "qemu2" against &{Name:functional-085000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.2 ClusterName:functional-085000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false E
xtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 09:38:41.733929    2802 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 09:38:41.739843    2802 out.go:177] 
	W0919 09:38:41.745888    2802 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0919 09:38:41.748925    2802 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-085000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.22s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-085000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-085000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (108.699583ms)

-- stdout --
	* [functional-085000] minikube v1.31.2 sur Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17240
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0919 09:38:41.881970    2813 out.go:296] Setting OutFile to fd 1 ...
	I0919 09:38:41.882094    2813 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:38:41.882098    2813 out.go:309] Setting ErrFile to fd 2...
	I0919 09:38:41.882100    2813 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 09:38:41.882223    2813 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17240-943/.minikube/bin
	I0919 09:38:41.883523    2813 out.go:303] Setting JSON to false
	I0919 09:38:41.899549    2813 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":495,"bootTime":1695141026,"procs":405,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0919 09:38:41.899643    2813 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0919 09:38:41.903898    2813 out.go:177] * [functional-085000] minikube v1.31.2 sur Darwin 13.5.2 (arm64)
	I0919 09:38:41.910856    2813 out.go:177]   - MINIKUBE_LOCATION=17240
	I0919 09:38:41.914882    2813 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	I0919 09:38:41.910954    2813 notify.go:220] Checking for updates...
	I0919 09:38:41.921838    2813 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0919 09:38:41.924884    2813 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 09:38:41.927919    2813 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	I0919 09:38:41.930848    2813 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 09:38:41.934095    2813 config.go:182] Loaded profile config "functional-085000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 09:38:41.934362    2813 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 09:38:41.938891    2813 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0919 09:38:41.945878    2813 start.go:298] selected driver: qemu2
	I0919 09:38:41.945883    2813 start.go:902] validating driver "qemu2" against &{Name:functional-085000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:functional-085000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 09:38:41.945930    2813 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 09:38:41.951714    2813 out.go:177] 
	W0919 09:38:41.955921    2813 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0919 09:38:41.959842    2813 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

TestFunctional/parallel/StatusCmd (0.25s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.25s)

TestFunctional/parallel/AddonsCmd (0.12s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/PersistentVolumeClaim (24.29s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [8bcbb821-2435-4acb-9687-0a6a8fe6a1ce] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.007056916s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-085000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-085000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-085000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-085000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c70c5083-578b-4bc9-9a41-28c1a86e2310] Pending
helpers_test.go:344: "sp-pod" [c70c5083-578b-4bc9-9a41-28c1a86e2310] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [c70c5083-578b-4bc9-9a41-28c1a86e2310] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.009867958s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-085000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-085000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-085000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [de2778f7-9372-48ee-bd97-99aaa33b5737] Pending
helpers_test.go:344: "sp-pod" [de2778f7-9372-48ee-bd97-99aaa33b5737] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [de2778f7-9372-48ee-bd97-99aaa33b5737] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.008720833s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-085000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.29s)

TestFunctional/parallel/SSHCmd (0.13s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.13s)

TestFunctional/parallel/CpCmd (0.28s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 ssh -n functional-085000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 cp functional-085000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd3736025010/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 ssh -n functional-085000 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.28s)

TestFunctional/parallel/FileSync (0.08s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/2051/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 ssh "sudo cat /etc/test/nested/copy/2051/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.08s)

TestFunctional/parallel/CertSync (0.43s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/2051.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 ssh "sudo cat /etc/ssl/certs/2051.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/2051.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 ssh "sudo cat /usr/share/ca-certificates/2051.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/20512.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 ssh "sudo cat /etc/ssl/certs/20512.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/20512.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 ssh "sudo cat /usr/share/ca-certificates/20512.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.43s)

TestFunctional/parallel/NodeLabels (0.04s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-085000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.07s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-085000 ssh "sudo systemctl is-active crio": exit status 1 (65.073875ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.07s)

TestFunctional/parallel/License (0.32s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.32s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.17s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.17s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-085000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.2
registry.k8s.io/kube-proxy:v1.28.2
registry.k8s.io/kube-controller-manager:v1.28.2
registry.k8s.io/kube-apiserver:v1.28.2
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-085000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-085000
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-085000 image ls --format short --alsologtostderr:
I0919 09:38:43.688380    2841 out.go:296] Setting OutFile to fd 1 ...
I0919 09:38:43.688554    2841 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0919 09:38:43.688557    2841 out.go:309] Setting ErrFile to fd 2...
I0919 09:38:43.688560    2841 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0919 09:38:43.688701    2841 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17240-943/.minikube/bin
I0919 09:38:43.689196    2841 config.go:182] Loaded profile config "functional-085000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0919 09:38:43.689256    2841 config.go:182] Loaded profile config "functional-085000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0919 09:38:43.690176    2841 ssh_runner.go:195] Run: systemctl --version
I0919 09:38:43.690187    2841 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/functional-085000/id_rsa Username:docker}
I0919 09:38:43.721274    2841 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-085000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-085000 | 7fa39c05d6f23 | 30B    |
| registry.k8s.io/kube-apiserver              | v1.28.2           | 30bb499447fe1 | 120MB  |
| registry.k8s.io/kube-proxy                  | v1.28.2           | 7da62c127fc0f | 68.3MB |
| registry.k8s.io/etcd                        | 3.5.9-0           | 9cdd6470f48c8 | 181MB  |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| docker.io/localhost/my-image                | functional-085000 | 136adb799d4d9 | 1.41MB |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| gcr.io/google-containers/addon-resizer      | functional-085000 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/kube-scheduler              | v1.28.2           | 64fc40cee3716 | 57.8MB |
| docker.io/library/nginx                     | latest            | 91582cfffc2d0 | 192MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/kube-controller-manager     | v1.28.2           | 89d57b83c1786 | 116MB  |
| registry.k8s.io/coredns/coredns             | v1.10.1           | 97e04611ad434 | 51.4MB |
| gcr.io/k8s-minikube/busybox                 | latest            | 71a676dd070f4 | 1.41MB |
| docker.io/library/nginx                     | alpine            | fa0c6bb795403 | 43.4MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-085000 image ls --format table --alsologtostderr:
I0919 09:38:45.679875    2854 out.go:296] Setting OutFile to fd 1 ...
I0919 09:38:45.680064    2854 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0919 09:38:45.680067    2854 out.go:309] Setting ErrFile to fd 2...
I0919 09:38:45.680069    2854 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0919 09:38:45.680209    2854 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17240-943/.minikube/bin
I0919 09:38:45.680659    2854 config.go:182] Loaded profile config "functional-085000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0919 09:38:45.680723    2854 config.go:182] Loaded profile config "functional-085000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0919 09:38:45.681539    2854 ssh_runner.go:195] Run: systemctl --version
I0919 09:38:45.681548    2854 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/functional-085000/id_rsa Username:docker}
I0919 09:38:45.714507    2854 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
2023/09/19 09:38:55 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-085000 image ls --format json --alsologtostderr:
[{"id":"fa0c6bb795403f8762e5cbf7b9f395aa036e7bd61c707485c1968b79bb3da9f1","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43400000"},{"id":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"181000000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"7fa39c05d6f230a5c47141e60cdecab523b62d3926f2536ec97e7e0d2a2d0ea2","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-085000"],"size":"30"},{"id":"64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.2"],"size":"57800000"},{"id":"89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.2"],"size":"116000000"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51400000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-085000"],"size":"32900000"},{"id":"136adb799d4d9cd9ada3f513cc9df8483f68eea297421f95e68e1dcf4393ca73","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-085000"],"size":"1410000"},{"id":"30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.2"],"size":"120000000"},{"id":"7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.2"],"size":"68300000"},{"id":"91582cfffc2d0daa6f42adb6fb74665a047310f76a28e9ed5b0185a2d0f362a6","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"192000000"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1410000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-085000 image ls --format json --alsologtostderr:
I0919 09:38:45.603012    2852 out.go:296] Setting OutFile to fd 1 ...
I0919 09:38:45.603162    2852 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0919 09:38:45.603165    2852 out.go:309] Setting ErrFile to fd 2...
I0919 09:38:45.603167    2852 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0919 09:38:45.603310    2852 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17240-943/.minikube/bin
I0919 09:38:45.603779    2852 config.go:182] Loaded profile config "functional-085000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0919 09:38:45.603838    2852 config.go:182] Loaded profile config "functional-085000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0919 09:38:45.604719    2852 ssh_runner.go:195] Run: systemctl --version
I0919 09:38:45.604729    2852 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/functional-085000/id_rsa Username:docker}
I0919 09:38:45.635838    2852 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-085000 image ls --format yaml --alsologtostderr:
- id: 7fa39c05d6f230a5c47141e60cdecab523b62d3926f2536ec97e7e0d2a2d0ea2
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-085000
size: "30"
- id: 30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.2
size: "120000000"
- id: 89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.2
size: "116000000"
- id: 64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.2
size: "57800000"
- id: 91582cfffc2d0daa6f42adb6fb74665a047310f76a28e9ed5b0185a2d0f362a6
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "192000000"
- id: fa0c6bb795403f8762e5cbf7b9f395aa036e7bd61c707485c1968b79bb3da9f1
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43400000"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.2
size: "68300000"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51400000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "181000000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-085000
size: "32900000"

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-085000 image ls --format yaml --alsologtostderr:
I0919 09:38:43.761676    2843 out.go:296] Setting OutFile to fd 1 ...
I0919 09:38:43.761824    2843 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0919 09:38:43.761827    2843 out.go:309] Setting ErrFile to fd 2...
I0919 09:38:43.761830    2843 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0919 09:38:43.761964    2843 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17240-943/.minikube/bin
I0919 09:38:43.762378    2843 config.go:182] Loaded profile config "functional-085000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0919 09:38:43.762443    2843 config.go:182] Loaded profile config "functional-085000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0919 09:38:43.763288    2843 ssh_runner.go:195] Run: systemctl --version
I0919 09:38:43.763297    2843 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/functional-085000/id_rsa Username:docker}
I0919 09:38:43.794613    2843 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)
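The YAML image list above is built from the command shown at the end of the stderr trace, `docker images --no-trunc --format "{{json .}}"`, which emits one JSON object per image tag. A minimal sketch of that transformation — the sample lines and field subset below are illustrative assumptions, not real daemon output:

```python
import json

# Hypothetical sample of `docker images --no-trunc --format "{{json .}}"` output.
# Docker emits more fields; only the ones used below are assumed here.
SAMPLE = """\
{"ID": "sha256:829e9de338bd", "Repository": "registry.k8s.io/pause", "Tag": "3.9", "Size": "514kB"}
{"ID": "sha256:8057e0500773", "Repository": "registry.k8s.io/pause", "Tag": "3.1", "Size": "525kB"}
"""

def parse_image_list(raw: str):
    """Group docker's JSON-lines output into id/repoTags/size entries,
    mirroring the shape of the YAML listing in the log above."""
    images = {}
    for line in raw.splitlines():
        if not line.strip():
            continue
        row = json.loads(line)
        entry = images.setdefault(
            row["ID"], {"id": row["ID"], "repoTags": [], "size": row["Size"]}
        )
        entry["repoTags"].append(f'{row["Repository"]}:{row["Tag"]}')
    return list(images.values())

if __name__ == "__main__":
    for img in parse_image_list(SAMPLE):
        print(img["id"], img["repoTags"])
```

Images with several tags collapse into one entry keyed by ID, which is why a single entry in the listing can carry multiple repoTags.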

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (1.77s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-085000 ssh pgrep buildkitd: exit status 1 (62.798208ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 image build -t localhost/my-image:functional-085000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-arm64 -p functional-085000 image build -t localhost/my-image:functional-085000 testdata/build --alsologtostderr: (1.627131166s)
functional_test.go:319: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-085000 image build -t localhost/my-image:functional-085000 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
a01966dde7f8: Pulling fs layer
a01966dde7f8: Verifying Checksum
a01966dde7f8: Download complete
a01966dde7f8: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> 71a676dd070f
Step 2/3 : RUN true
---> Running in b9ef442af6cc
Removing intermediate container b9ef442af6cc
---> 67ba0dfce424
Step 3/3 : ADD content.txt /
---> 136adb799d4d
Successfully built 136adb799d4d
Successfully tagged localhost/my-image:functional-085000
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-085000 image build -t localhost/my-image:functional-085000 testdata/build --alsologtostderr:
I0919 09:38:43.898610    2847 out.go:296] Setting OutFile to fd 1 ...
I0919 09:38:43.898812    2847 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0919 09:38:43.898817    2847 out.go:309] Setting ErrFile to fd 2...
I0919 09:38:43.898819    2847 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0919 09:38:43.898942    2847 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17240-943/.minikube/bin
I0919 09:38:43.899393    2847 config.go:182] Loaded profile config "functional-085000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0919 09:38:43.899811    2847 config.go:182] Loaded profile config "functional-085000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0919 09:38:43.900652    2847 ssh_runner.go:195] Run: systemctl --version
I0919 09:38:43.900664    2847 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17240-943/.minikube/machines/functional-085000/id_rsa Username:docker}
I0919 09:38:43.934484    2847 build_images.go:151] Building image from path: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.3507109349.tar
I0919 09:38:43.934536    2847 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0919 09:38:43.937596    2847 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3507109349.tar
I0919 09:38:43.939121    2847 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3507109349.tar: stat -c "%s %y" /var/lib/minikube/build/build.3507109349.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3507109349.tar': No such file or directory
I0919 09:38:43.939136    2847 ssh_runner.go:362] scp /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.3507109349.tar --> /var/lib/minikube/build/build.3507109349.tar (3072 bytes)
I0919 09:38:43.946390    2847 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3507109349
I0919 09:38:43.949835    2847 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3507109349 -xf /var/lib/minikube/build/build.3507109349.tar
I0919 09:38:43.952971    2847 docker.go:339] Building image: /var/lib/minikube/build/build.3507109349
I0919 09:38:43.953008    2847 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-085000 /var/lib/minikube/build/build.3507109349
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I0919 09:38:45.486280    2847 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-085000 /var/lib/minikube/build/build.3507109349: (1.533284333s)
I0919 09:38:45.486351    2847 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3507109349
I0919 09:38:45.489298    2847 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3507109349.tar
I0919 09:38:45.492139    2847 build_images.go:207] Built localhost/my-image:functional-085000 from /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.3507109349.tar
I0919 09:38:45.492156    2847 build_images.go:123] succeeded building to: functional-085000
I0919 09:38:45.492159    2847 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.77s)
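The build above goes down the legacy path: the test first probes for a BuildKit daemon with `pgrep buildkitd` (the non-zero exit in the stderr block), and only then is the image built with plain `docker build`, which is what triggers the DEPRECATED notice. A rough sketch of that probe-and-fallback decision, with stand-in shell probes instead of `minikube ssh` (the command names below are placeholders, not minikube's):

```python
import subprocess

def pick_builder(probe_cmd):
    """Choose a build path from a probe command's exit status, mirroring
    the pgrep-buildkitd check in the log above. The probe commands passed
    in below are stand-ins, not what minikube actually runs."""
    result = subprocess.run(probe_cmd, capture_output=True)
    return "buildkit" if result.returncode == 0 else "legacy docker build"

if __name__ == "__main__":
    print(pick_builder(["sh", "-c", "exit 1"]))  # probe fails -> legacy path
    print(pick_builder(["sh", "-c", "exit 0"]))  # daemon present -> buildkit
```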

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.84s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.773099917s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-085000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.84s)

TestFunctional/parallel/DockerEnv/bash (0.43s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-085000 docker-env) && out/minikube-darwin-arm64 status -p functional-085000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-085000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.43s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

TestFunctional/parallel/ServiceCmd/DeployApp (12.12s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-085000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-085000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-tc97p" [eea8b835-cda1-4aac-b659-533c9aed100d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-tc97p" [eea8b835-cda1-4aac-b659-533c9aed100d] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.016843458s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.12s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 image load --daemon gcr.io/google-containers/addon-resizer:functional-085000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-arm64 -p functional-085000 image load --daemon gcr.io/google-containers/addon-resizer:functional-085000 --alsologtostderr: (2.131777458s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.21s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 image load --daemon gcr.io/google-containers/addon-resizer:functional-085000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-arm64 -p functional-085000 image load --daemon gcr.io/google-containers/addon-resizer:functional-085000 --alsologtostderr: (1.488873667s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.56s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.66s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.613744959s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-085000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 image load --daemon gcr.io/google-containers/addon-resizer:functional-085000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-arm64 -p functional-085000 image load --daemon gcr.io/google-containers/addon-resizer:functional-085000 --alsologtostderr: (1.854977834s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.66s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 image save gcr.io/google-containers/addon-resizer:functional-085000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.54s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 image rm gcr.io/google-containers/addon-resizer:functional-085000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.16s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.59s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.64s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-085000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 image save --daemon gcr.io/google-containers/addon-resizer:functional-085000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-085000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.64s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-085000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-085000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [aa5dff49-3555-4e75-a209-10c5271f5acc] Pending
helpers_test.go:344: "nginx-svc" [aa5dff49-3555-4e75-a209-10c5271f5acc] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [aa5dff49-3555-4e75-a209-10c5271f5acc] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.005998083s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.12s)

TestFunctional/parallel/ServiceCmd/List (0.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.09s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 service list -o json
functional_test.go:1493: Took "93.27175ms" to run "out/minikube-darwin-arm64 -p functional-085000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.09s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.105.4:31225
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.10s)

TestFunctional/parallel/ServiceCmd/Format (0.11s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.11s)

TestFunctional/parallel/ServiceCmd/URL (0.11s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.105.4:31225
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.11s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-085000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.105.213.14 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-085000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.18s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.18s)

TestFunctional/parallel/ProfileCmd/profile_list (0.15s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1314: Took "115.805541ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1328: Took "32.766292ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.15s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.15s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1365: Took "117.94875ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1378: Took "32.5965ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.15s)
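Timings throughout this log — `Took "117.94875ms"`, `(1.533284333s)` — are Go duration strings. A small helper for converting them to seconds when post-processing a report like this one; the format handling is my assumption about the log, not part of minikube:

```python
import re

# Seconds per Go duration unit (subset of what time.ParseDuration accepts).
UNITS = {"ns": 1e-9, "us": 1e-6, "µs": 1e-6, "ms": 1e-3, "s": 1.0, "m": 60.0, "h": 3600.0}

def parse_go_duration(text: str) -> float:
    """Convert a Go duration string such as '115.805541ms' or '1m30s'
    (the format used in this log's timing lines) to seconds."""
    total = 0.0
    # 'ms' must be tried before 's' and 'm' so '30ms' is not read as '30m'.
    for value, unit in re.findall(r"(\d+(?:\.\d+)?)(ns|us|µs|ms|s|m|h)", text):
        total += float(value) * UNITS[unit]
    return total
```

For example, `parse_go_duration("115.805541ms")` yields roughly 0.1158 seconds, and compound forms like `"1m30s"` sum their components.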

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (5.2s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-085000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1194338047/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1695141501944219000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1194338047/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1695141501944219000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1194338047/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1695141501944219000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1194338047/001/test-1695141501944219000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-085000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (64.783709ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 19 16:38 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 19 16:38 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 19 16:38 test-1695141501944219000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 ssh cat /mount-9p/test-1695141501944219000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-085000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [4dae8ecf-0197-445d-acb1-603b28421fb4] Pending
helpers_test.go:344: "busybox-mount" [4dae8ecf-0197-445d-acb1-603b28421fb4] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [4dae8ecf-0197-445d-acb1-603b28421fb4] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [4dae8ecf-0197-445d-acb1-603b28421fb4] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.007655334s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-085000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-085000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1194338047/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.20s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-085000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1232754094/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-085000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (69.998333ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-085000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1232754094/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-085000 ssh "sudo umount -f /mount-9p": exit status 1 (64.496208ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-085000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-085000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1232754094/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (0.78s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-085000
--- PASS: TestFunctional/delete_addon-resizer_images (0.12s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-085000
--- PASS: TestFunctional/delete_my-image_image (0.04s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-085000
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-964000 --driver=qemu2 
image_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -p image-964000 --driver=qemu2 : (30.111233875s)
--- PASS: TestImageBuild/serial/Setup (30.11s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-964000
image_test.go:78: (dbg) Done: out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-964000: (1.061450458s)
--- PASS: TestImageBuild/serial/NormalBuild (1.06s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-964000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.13s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-964000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.10s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-arm64 start -p ingress-addon-legacy-969000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=qemu2 
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-darwin-arm64 start -p ingress-addon-legacy-969000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=qemu2 : (1m6.399399542s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (66.40s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-969000 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-969000 addons enable ingress --alsologtostderr -v=5: (17.910071416s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (17.91s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-969000 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.25s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-095000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 start -p json-output-095000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : (43.677525625s)
--- PASS: TestJSONOutput/start/Command (43.68s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-095000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.28s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-095000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.22s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-095000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-095000 --output=json --user=testUser: (9.075239791s)
--- PASS: TestJSONOutput/stop/Command (9.08s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-158000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-158000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (94.890417ms)
-- stdout --
	{"specversion":"1.0","id":"8362e09d-44bc-4adf-8903-db4a45902676","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-158000] minikube v1.31.2 on Darwin 13.5.2 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"950d8307-08e9-4881-b9eb-db2b9d19350c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17240"}}
	{"specversion":"1.0","id":"8fc53256-f388-4cc4-a97d-ad1b984cb622","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig"}}
	{"specversion":"1.0","id":"fa8e1191-2983-4729-9988-1b7ae0357cd2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"ed059949-f19d-4120-b1f0-7f6b02c5b295","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c88dbf80-a06a-4fff-95bb-e1ceb435ef69","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube"}}
	{"specversion":"1.0","id":"437c1223-2c85-42f6-8835-e8abac1b5010","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"86d8a2c4-6150-4f9d-a844-7b93f853c4c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-158000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-158000
--- PASS: TestErrorJSONOutput (0.33s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-930000 --driver=qemu2 
E0919 09:42:44.391749    2051 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/functional-085000/client.crt: no such file or directory
E0919 09:42:44.398516    2051 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/functional-085000/client.crt: no such file or directory
E0919 09:42:44.410593    2051 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/functional-085000/client.crt: no such file or directory
E0919 09:42:44.432664    2051 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/functional-085000/client.crt: no such file or directory
E0919 09:42:44.474696    2051 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/functional-085000/client.crt: no such file or directory
E0919 09:42:44.556786    2051 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/functional-085000/client.crt: no such file or directory
E0919 09:42:44.718869    2051 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/functional-085000/client.crt: no such file or directory
E0919 09:42:45.040937    2051 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/functional-085000/client.crt: no such file or directory
E0919 09:42:45.683061    2051 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/functional-085000/client.crt: no such file or directory
E0919 09:42:46.965402    2051 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/functional-085000/client.crt: no such file or directory
E0919 09:42:49.527602    2051 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/functional-085000/client.crt: no such file or directory
E0919 09:42:54.649654    2051 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/functional-085000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-arm64 start -p first-930000 --driver=qemu2 : (29.806057875s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p second-932000 --driver=qemu2 
E0919 09:43:04.890943    2051 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/functional-085000/client.crt: no such file or directory
E0919 09:43:25.372777    2051 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17240-943/.minikube/profiles/functional-085000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-arm64 start -p second-932000 --driver=qemu2 : (30.9440395s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 profile first-930000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 profile second-932000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-932000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-932000
helpers_test.go:175: Cleaning up "first-930000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-930000
--- PASS: TestMinikubeProfile (61.50s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-034000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-034000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (95.647084ms)
-- stdout --
	* [NoKubernetes-034000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17240
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17240-943/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17240-943/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-034000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-034000 "sudo systemctl is-active --quiet service kubelet": exit status 89 (41.245291ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p NoKubernetes-034000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.13s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-034000
--- PASS: TestNoKubernetes/serial/Stop (0.06s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-034000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-034000 "sudo systemctl is-active --quiet service kubelet": exit status 89 (41.305666ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p NoKubernetes-034000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-404000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (0.06s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-404000 -n old-k8s-version-404000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-404000 -n old-k8s-version-404000: exit status 7 (28.729417ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-404000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/no-preload/serial/Stop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-820000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/no-preload/serial/Stop (0.06s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.08s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-820000 -n no-preload-820000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-820000 -n no-preload-820000: exit status 7 (25.718667ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-820000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.08s)

TestStartStop/group/embed-certs/serial/Stop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-444000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/embed-certs/serial/Stop (0.06s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-444000 -n embed-certs-444000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-444000 -n embed-certs-444000: exit status 7 (27.161208ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-444000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-645000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-645000 -n default-k8s-diff-port-645000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-645000 -n default-k8s-diff-port-645000: exit status 7 (27.025917ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-645000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-104000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-104000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/newest-cni/serial/Stop (0.06s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-104000 -n newest-cni-104000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-104000 -n newest-cni-104000: exit status 7 (28.621542ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-104000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (21/244)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.28.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.2/cached-images (0.00s)

TestDownloadOnly/v1.28.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.2/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.2/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:210: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/VerifyCleanup (13.71s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-085000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3721660534/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-085000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3721660534/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-085000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3721660534/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-085000 ssh "findmnt -T" /mount1: exit status 1 (87.253792ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-085000 ssh "findmnt -T" /mount2: exit status 1 (62.886875ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-085000 ssh "findmnt -T" /mount2: exit status 1 (61.450083ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-085000 ssh "findmnt -T" /mount2: exit status 1 (62.599917ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-085000 ssh "findmnt -T" /mount2: exit status 1 (63.456041ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-085000 ssh "findmnt -T" /mount2: exit status 1 (62.546541ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-085000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-085000 ssh "findmnt -T" /mount2: exit status 1 (61.008708ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-085000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3721660534/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-085000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3721660534/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-085000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3721660534/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (13.71s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (2.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-826000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-826000

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-826000

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-826000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-826000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-826000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-826000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-826000

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-826000

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-826000

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-826000

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-826000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826000"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-826000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826000"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-826000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826000"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-826000

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-826000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826000"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-826000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826000"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-826000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-826000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-826000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-826000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-826000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-826000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-826000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-826000" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-826000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826000"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-826000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826000"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-826000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826000"

>>> host: iptables-save:
* Profile "cilium-826000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826000"

>>> host: iptables table nat:
* Profile "cilium-826000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-826000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-826000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-826000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-826000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-826000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-826000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-826000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-826000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-826000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-826000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-826000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-826000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826000"

>>> host: kubelet daemon config:
* Profile "cilium-826000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826000"

>>> k8s: kubelet logs:
* Profile "cilium-826000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-826000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-826000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-826000

>>> host: docker daemon status:
* Profile "cilium-826000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826000"

>>> host: docker daemon config:
* Profile "cilium-826000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-826000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826000"

>>> host: docker system info:
* Profile "cilium-826000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826000"

>>> host: cri-docker daemon status:
* Profile "cilium-826000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826000"

>>> host: cri-docker daemon config:
* Profile "cilium-826000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-826000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-826000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826000"

>>> host: cri-dockerd version:
* Profile "cilium-826000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826000"

>>> host: containerd daemon status:
* Profile "cilium-826000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826000"

>>> host: containerd daemon config:
* Profile "cilium-826000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-826000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-826000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826000"

>>> host: containerd config dump:
* Profile "cilium-826000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826000"

>>> host: crio daemon status:
* Profile "cilium-826000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826000"

>>> host: crio daemon config:
* Profile "cilium-826000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826000"

>>> host: /etc/crio:
* Profile "cilium-826000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826000"

>>> host: crio config:
* Profile "cilium-826000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-826000"

----------------------- debugLogs end: cilium-826000 [took: 2.079443584s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-826000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-826000
--- SKIP: TestNetworkPlugins/group/cilium (2.32s)

TestStartStop/group/disable-driver-mounts (0.24s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-048000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-048000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.24s)
